2026-04-18 00:00:14.297923 | Job console starting
2026-04-18 00:00:14.319316 | Updating git repos
2026-04-18 00:00:14.656411 | Cloning repos into workspace
2026-04-18 00:00:15.090649 | Restoring repo states
2026-04-18 00:00:15.124820 | Merging changes
2026-04-18 00:00:15.124843 | Checking out repos
2026-04-18 00:00:15.963907 | Preparing playbooks
2026-04-18 00:00:17.422955 | Running Ansible setup
2026-04-18 00:00:25.341108 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-18 00:00:26.588624 |
2026-04-18 00:00:26.588738 | PLAY [Base pre]
2026-04-18 00:00:26.637588 |
2026-04-18 00:00:26.637702 | TASK [Setup log path fact]
2026-04-18 00:00:26.684368 | orchestrator | ok
2026-04-18 00:00:26.721166 |
2026-04-18 00:00:26.721281 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-18 00:00:26.784192 | orchestrator | ok
2026-04-18 00:00:26.793390 |
2026-04-18 00:00:26.808171 | TASK [emit-job-header : Print job information]
2026-04-18 00:00:26.885320 | # Job Information
2026-04-18 00:00:26.885457 | Ansible Version: 2.16.14
2026-04-18 00:00:26.885485 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-18 00:00:26.885512 | Pipeline: periodic-midnight
2026-04-18 00:00:26.885530 | Executor: 521e9411259a
2026-04-18 00:00:26.885547 | Triggered by: https://github.com/osism/testbed
2026-04-18 00:00:26.885564 | Event ID: f16c9507bcc94f4aa7b5d8b5bd883ac2
2026-04-18 00:00:26.890988 |
2026-04-18 00:00:26.891062 | LOOP [emit-job-header : Print node information]
2026-04-18 00:00:27.118872 | orchestrator | ok:
2026-04-18 00:00:27.119048 | orchestrator | # Node Information
2026-04-18 00:00:27.119088 | orchestrator | Inventory Hostname: orchestrator
2026-04-18 00:00:27.119116 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-18 00:00:27.119139 | orchestrator | Username: zuul-testbed05
2026-04-18 00:00:27.119160 | orchestrator | Distro: Debian 12.13
2026-04-18 00:00:27.119184 | orchestrator | Provider: static-testbed
2026-04-18 00:00:27.119205 | orchestrator | Region:
2026-04-18 00:00:27.119226 | orchestrator | Label: testbed-orchestrator
2026-04-18 00:00:27.119246 | orchestrator | Product Name: OpenStack Nova
2026-04-18 00:00:27.119266 | orchestrator | Interface IP: 81.163.193.140
2026-04-18 00:00:27.148948 |
2026-04-18 00:00:27.149053 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-18 00:00:28.513908 | orchestrator -> localhost | changed
2026-04-18 00:00:28.520506 |
2026-04-18 00:00:28.520608 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-18 00:00:30.779807 | orchestrator -> localhost | changed
2026-04-18 00:00:30.805461 |
2026-04-18 00:00:30.805570 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-18 00:00:31.501754 | orchestrator -> localhost | ok
2026-04-18 00:00:31.507404 |
2026-04-18 00:00:31.507498 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-18 00:00:31.534804 | orchestrator | ok
2026-04-18 00:00:31.558890 | orchestrator | included: /var/lib/zuul/builds/1d4be2a3a6a642618b6b9325a3780e9a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-18 00:00:31.575324 |
2026-04-18 00:00:31.575435 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-18 00:00:34.051812 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-18 00:00:34.051969 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1d4be2a3a6a642618b6b9325a3780e9a/work/1d4be2a3a6a642618b6b9325a3780e9a_id_rsa
2026-04-18 00:00:34.051998 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1d4be2a3a6a642618b6b9325a3780e9a/work/1d4be2a3a6a642618b6b9325a3780e9a_id_rsa.pub
2026-04-18 00:00:34.052019 | orchestrator -> localhost | The key fingerprint is:
2026-04-18 00:00:34.052038 | orchestrator -> localhost | SHA256:QEUneOgNlo1pT3nunp8wgjStXh7dRdwq91xbkmcuW5U zuul-build-sshkey
2026-04-18 00:00:34.052055 | orchestrator -> localhost | The key's randomart image is:
2026-04-18 00:00:34.052105 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-18 00:00:34.052126 | orchestrator -> localhost | | .X+.. |
2026-04-18 00:00:34.052143 | orchestrator -> localhost | | .X =o. . . |
2026-04-18 00:00:34.052160 | orchestrator -> localhost | | +.* o o .|
2026-04-18 00:00:34.052176 | orchestrator -> localhost | | .oo . . o.|
2026-04-18 00:00:34.052192 | orchestrator -> localhost | | o S. . *E*|
2026-04-18 00:00:34.052213 | orchestrator -> localhost | | . + ... + B=|
2026-04-18 00:00:34.052229 | orchestrator -> localhost | | o +.+.. ..=|
2026-04-18 00:00:34.052245 | orchestrator -> localhost | | . o ooo . + |
2026-04-18 00:00:34.052262 | orchestrator -> localhost | | . . .o . |
2026-04-18 00:00:34.052278 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-18 00:00:34.052317 | orchestrator -> localhost | ok: Runtime: 0:00:00.981621
2026-04-18 00:00:34.058847 |
2026-04-18 00:00:34.058933 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-18 00:00:34.096227 | orchestrator | ok
2026-04-18 00:00:34.106929 | orchestrator | included: /var/lib/zuul/builds/1d4be2a3a6a642618b6b9325a3780e9a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-18 00:00:34.124011 |
2026-04-18 00:00:34.124113 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-18 00:00:34.142372 | orchestrator | skipping: Conditional result was False
2026-04-18 00:00:34.148895 |
2026-04-18 00:00:34.148976 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-18 00:00:35.508809 | orchestrator | changed
2026-04-18 00:00:35.533403 |
2026-04-18 00:00:35.533498 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-18 00:00:35.854327 | orchestrator | ok
2026-04-18 00:00:35.859689 |
2026-04-18 00:00:35.859774 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-18 00:00:36.379716 | orchestrator | ok
2026-04-18 00:00:36.384584 |
2026-04-18 00:00:36.384658 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-18 00:00:36.942270 | orchestrator | ok
2026-04-18 00:00:36.954316 |
2026-04-18 00:00:36.957476 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-18 00:00:36.977384 | orchestrator | skipping: Conditional result was False
2026-04-18 00:00:37.007387 |
2026-04-18 00:00:37.008136 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-18 00:00:38.200356 | orchestrator -> localhost | changed
2026-04-18 00:00:38.211309 |
2026-04-18 00:00:38.211399 | TASK [add-build-sshkey : Add back temp key]
2026-04-18 00:00:38.980335 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1d4be2a3a6a642618b6b9325a3780e9a/work/1d4be2a3a6a642618b6b9325a3780e9a_id_rsa (zuul-build-sshkey)
2026-04-18 00:00:38.980524 | orchestrator -> localhost | ok: Runtime: 0:00:00.035083
2026-04-18 00:00:38.987281 |
2026-04-18 00:00:38.987368 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-18 00:00:39.432830 | orchestrator | ok
2026-04-18 00:00:39.437646 |
2026-04-18 00:00:39.437723 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-18 00:00:39.470479 | orchestrator | skipping: Conditional result was False
2026-04-18 00:00:39.540807 |
2026-04-18 00:00:39.540903 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-18 00:00:40.097047 | orchestrator | ok
2026-04-18 00:00:40.125515 |
2026-04-18 00:00:40.125626 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-18 00:00:40.192905 | orchestrator | ok
2026-04-18 00:00:40.204243 |
2026-04-18 00:00:40.204340 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-18 00:00:41.030114 | orchestrator -> localhost | ok
2026-04-18 00:00:41.041926 |
2026-04-18 00:00:41.042025 | TASK [validate-host : Collect information about the host]
2026-04-18 00:00:42.552544 | orchestrator | ok
2026-04-18 00:00:42.581456 |
2026-04-18 00:00:42.581570 | TASK [validate-host : Sanitize hostname]
2026-04-18 00:00:42.726001 | orchestrator | ok
2026-04-18 00:00:42.730882 |
2026-04-18 00:00:42.730974 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-18 00:00:44.289341 | orchestrator -> localhost | changed
2026-04-18 00:00:44.294481 |
2026-04-18 00:00:44.294571 | TASK [validate-host : Collect information about zuul worker]
2026-04-18 00:00:44.988592 | orchestrator | ok
2026-04-18 00:00:44.992912 |
2026-04-18 00:00:44.993003 | TASK [validate-host : Write out all zuul information for each host]
2026-04-18 00:00:46.366323 | orchestrator -> localhost | changed
2026-04-18 00:00:46.374731 |
2026-04-18 00:00:46.374814 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-18 00:00:46.699355 | orchestrator | ok
2026-04-18 00:00:46.704376 |
2026-04-18 00:00:46.704451 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-18 00:01:53.466367 | orchestrator | changed:
2026-04-18 00:01:53.466590 | orchestrator | .d..t...... src/
2026-04-18 00:01:53.466625 | orchestrator | .d..t...... src/github.com/
2026-04-18 00:01:53.467249 | orchestrator | .d..t...... src/github.com/osism/
2026-04-18 00:01:53.467307 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-18 00:01:53.467332 | orchestrator | RedHat.yml
2026-04-18 00:01:53.485440 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-18 00:01:53.485458 | orchestrator | RedHat.yml
2026-04-18 00:01:53.485510 | orchestrator | = 2.2.0"...
2026-04-18 00:02:16.008225 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-18 00:02:16.023512 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-18 00:02:16.181458 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-18 00:02:16.726432 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-18 00:02:16.798572 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-18 00:02:17.237342 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-18 00:02:17.346352 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-18 00:02:18.139274 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-18 00:02:18.139332 | orchestrator |
2026-04-18 00:02:18.139339 | orchestrator | Providers are signed by their developers.
2026-04-18 00:02:18.139344 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-18 00:02:18.139351 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-18 00:02:18.139357 | orchestrator |
2026-04-18 00:02:18.139361 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-18 00:02:18.139372 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-18 00:02:18.139376 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-18 00:02:18.139380 | orchestrator | you run "tofu init" in the future.
2026-04-18 00:02:18.140042 | orchestrator |
2026-04-18 00:02:18.140119 | orchestrator | OpenTofu has been successfully initialized!
2026-04-18 00:02:18.140153 | orchestrator |
2026-04-18 00:02:18.140158 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-18 00:02:18.140163 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-18 00:02:18.140183 | orchestrator | should now work.
2026-04-18 00:02:18.140187 | orchestrator |
2026-04-18 00:02:18.140191 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-18 00:02:18.140195 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-18 00:02:18.140207 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-18 00:02:18.337364 | orchestrator | Created and switched to workspace "ci"!
2026-04-18 00:02:18.337538 | orchestrator |
2026-04-18 00:02:18.337548 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-18 00:02:18.337554 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-18 00:02:18.337560 | orchestrator | for this configuration.
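The initialization and workspace messages above imply a short OpenTofu bootstrap sequence. A hedged sketch of the commands the job presumably runs; the exact invocations and flags are not shown in the log and are assumptions:

```shell
# Hypothetical reconstruction of the OpenTofu bootstrap implied by the
# log output above; command ordering and flags are assumed, not quoted
# from the job definition.
tofu init                # installs providers, writes .terraform.lock.hcl
tofu workspace new ci    # 'Created and switched to workspace "ci"!'
tofu plan                # produces an execution plan for the testbed
```

The lock file mentioned in the output pins provider versions, so committing it makes subsequent `tofu init` runs reproducible.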
2026-04-18 00:02:18.762111 | orchestrator | ci.auto.tfvars
2026-04-18 00:02:18.774083 | orchestrator | default_custom.tf
2026-04-18 00:02:22.515021 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-18 00:02:23.064257 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-18 00:02:23.461497 | orchestrator |
2026-04-18 00:02:23.461861 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-18 00:02:23.461875 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-18 00:02:23.461915 | orchestrator | + create
2026-04-18 00:02:23.461940 | orchestrator | <= read (data resources)
2026-04-18 00:02:23.461962 | orchestrator |
2026-04-18 00:02:23.461970 | orchestrator | OpenTofu will perform the following actions:
2026-04-18 00:02:23.462283 | orchestrator |
2026-04-18 00:02:23.462404 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-18 00:02:23.462467 | orchestrator | # (config refers to values not yet known)
2026-04-18 00:02:23.462474 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-18 00:02:23.462480 | orchestrator | + checksum = (known after apply)
2026-04-18 00:02:23.462487 | orchestrator | + created_at = (known after apply)
2026-04-18 00:02:23.462493 | orchestrator | + file = (known after apply)
2026-04-18 00:02:23.462499 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.462549 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.462557 | orchestrator | + min_disk_gb = (known after apply)
2026-04-18 00:02:23.462564 | orchestrator | + min_ram_mb = (known after apply)
2026-04-18 00:02:23.462570 | orchestrator | + most_recent = true
2026-04-18 00:02:23.462577 | orchestrator | + name = (known after apply)
2026-04-18 00:02:23.462583 | orchestrator | + protected = (known after apply)
2026-04-18 00:02:23.462590 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.462599 | orchestrator | + schema = (known after apply)
2026-04-18 00:02:23.462629 | orchestrator | + size_bytes = (known after apply)
2026-04-18 00:02:23.462635 | orchestrator | + tags = (known after apply)
2026-04-18 00:02:23.462642 | orchestrator | + updated_at = (known after apply)
2026-04-18 00:02:23.462648 | orchestrator | }
2026-04-18 00:02:23.463635 | orchestrator |
2026-04-18 00:02:23.464036 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-18 00:02:23.464070 | orchestrator | # (config refers to values not yet known)
2026-04-18 00:02:23.464077 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-18 00:02:23.464088 | orchestrator | + checksum = (known after apply)
2026-04-18 00:02:23.464094 | orchestrator | + created_at = (known after apply)
2026-04-18 00:02:23.464119 | orchestrator | + file = (known after apply)
2026-04-18 00:02:23.464125 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.464132 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.464139 | orchestrator | + min_disk_gb = (known after apply)
2026-04-18 00:02:23.464145 | orchestrator | + min_ram_mb = (known after apply)
2026-04-18 00:02:23.464211 | orchestrator | + most_recent = true
2026-04-18 00:02:23.464218 | orchestrator | + name = (known after apply)
2026-04-18 00:02:23.464224 | orchestrator | + protected = (known after apply)
2026-04-18 00:02:23.464235 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.464311 | orchestrator | + schema = (known after apply)
2026-04-18 00:02:23.464318 | orchestrator | + size_bytes = (known after apply)
2026-04-18 00:02:23.464324 | orchestrator | + tags = (known after apply)
2026-04-18 00:02:23.464330 | orchestrator | + updated_at = (known after apply)
2026-04-18 00:02:23.464337 | orchestrator | }
2026-04-18 00:02:23.464474 | orchestrator |
2026-04-18 00:02:23.464495 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-18 00:02:23.464503 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-18 00:02:23.464509 | orchestrator | + content = (known after apply)
2026-04-18 00:02:23.464516 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-18 00:02:23.464522 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-18 00:02:23.464529 | orchestrator | + content_md5 = (known after apply)
2026-04-18 00:02:23.464535 | orchestrator | + content_sha1 = (known after apply)
2026-04-18 00:02:23.464542 | orchestrator | + content_sha256 = (known after apply)
2026-04-18 00:02:23.464548 | orchestrator | + content_sha512 = (known after apply)
2026-04-18 00:02:23.464554 | orchestrator | + directory_permission = "0777"
2026-04-18 00:02:23.464561 | orchestrator | + file_permission = "0644"
2026-04-18 00:02:23.464567 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-18 00:02:23.464573 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.464579 | orchestrator | }
2026-04-18 00:02:23.464783 | orchestrator |
2026-04-18 00:02:23.464932 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-18 00:02:23.464945 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-18 00:02:23.464982 | orchestrator | + content = (known after apply)
2026-04-18 00:02:23.464989 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-18 00:02:23.464999 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-18 00:02:23.465022 | orchestrator | + content_md5 = (known after apply)
2026-04-18 00:02:23.465028 | orchestrator | + content_sha1 = (known after apply)
2026-04-18 00:02:23.465038 | orchestrator | + content_sha256 = (known after apply)
2026-04-18 00:02:23.465054 | orchestrator | + content_sha512 = (known after apply)
2026-04-18 00:02:23.465078 | orchestrator | + directory_permission = "0777"
2026-04-18 00:02:23.465089 | orchestrator | + file_permission = "0644"
2026-04-18 00:02:23.465127 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-18 00:02:23.465135 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.465186 | orchestrator | }
2026-04-18 00:02:23.465399 | orchestrator |
2026-04-18 00:02:23.465462 | orchestrator | # local_file.inventory will be created
2026-04-18 00:02:23.465470 | orchestrator | + resource "local_file" "inventory" {
2026-04-18 00:02:23.465476 | orchestrator | + content = (known after apply)
2026-04-18 00:02:23.465482 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-18 00:02:23.465544 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-18 00:02:23.465582 | orchestrator | + content_md5 = (known after apply)
2026-04-18 00:02:23.465594 | orchestrator | + content_sha1 = (known after apply)
2026-04-18 00:02:23.465602 | orchestrator | + content_sha256 = (known after apply)
2026-04-18 00:02:23.465626 | orchestrator | + content_sha512 = (known after apply)
2026-04-18 00:02:23.465637 | orchestrator | + directory_permission = "0777"
2026-04-18 00:02:23.465647 | orchestrator | + file_permission = "0644"
2026-04-18 00:02:23.465662 | orchestrator | + filename = "inventory.ci"
2026-04-18 00:02:23.465677 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.465739 | orchestrator | }
2026-04-18 00:02:23.466547 | orchestrator |
2026-04-18 00:02:23.466603 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-18 00:02:23.466611 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-18 00:02:23.466647 | orchestrator | + content = (sensitive value)
2026-04-18 00:02:23.466658 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-18 00:02:23.466665 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-18 00:02:23.466671 | orchestrator | + content_md5 = (known after apply)
2026-04-18 00:02:23.466697 | orchestrator | + content_sha1 = (known after apply)
2026-04-18 00:02:23.466720 | orchestrator | + content_sha256 = (known after apply)
2026-04-18 00:02:23.466727 | orchestrator | + content_sha512 = (known after apply)
2026-04-18 00:02:23.466734 | orchestrator | + directory_permission = "0700"
2026-04-18 00:02:23.466745 | orchestrator | + file_permission = "0600"
2026-04-18 00:02:23.466751 | orchestrator | + filename = ".id_rsa.ci"
2026-04-18 00:02:23.466757 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.466763 | orchestrator | }
2026-04-18 00:02:23.466910 | orchestrator |
2026-04-18 00:02:23.466934 | orchestrator | # null_resource.node_semaphore will be created
2026-04-18 00:02:23.466942 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-18 00:02:23.467057 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.467070 | orchestrator | }
2026-04-18 00:02:23.467321 | orchestrator |
2026-04-18 00:02:23.467343 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-18 00:02:23.467349 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-18 00:02:23.467355 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.467360 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.467366 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.467371 | orchestrator | + image_id = (known after apply)
2026-04-18 00:02:23.467377 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.467382 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-18 00:02:23.467388 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.467393 | orchestrator | + size = 80
2026-04-18 00:02:23.467399 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.467404 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.467410 | orchestrator | }
2026-04-18 00:02:23.467520 | orchestrator |
2026-04-18 00:02:23.467569 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-18 00:02:23.467575 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-18 00:02:23.467581 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.467586 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.467606 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.467626 | orchestrator | + image_id = (known after apply)
2026-04-18 00:02:23.467632 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.467637 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-18 00:02:23.467705 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.467714 | orchestrator | + size = 80
2026-04-18 00:02:23.467720 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.467741 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.467746 | orchestrator | }
2026-04-18 00:02:23.467918 | orchestrator |
2026-04-18 00:02:23.467973 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-18 00:02:23.467980 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-18 00:02:23.467986 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.467991 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.468000 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.468006 | orchestrator | + image_id = (known after apply)
2026-04-18 00:02:23.468011 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.468029 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-18 00:02:23.468038 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.468044 | orchestrator | + size = 80
2026-04-18 00:02:23.468049 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.468122 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.468141 | orchestrator | }
2026-04-18 00:02:23.468391 | orchestrator |
2026-04-18 00:02:23.468420 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-18 00:02:23.468426 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-18 00:02:23.468432 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.468451 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.468457 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.468462 | orchestrator | + image_id = (known after apply)
2026-04-18 00:02:23.468489 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.468495 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-18 00:02:23.468500 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.468506 | orchestrator | + size = 80
2026-04-18 00:02:23.468519 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.468547 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.468562 | orchestrator | }
2026-04-18 00:02:23.468922 | orchestrator |
2026-04-18 00:02:23.468953 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-18 00:02:23.468959 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-18 00:02:23.468964 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.468972 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.468999 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.469004 | orchestrator | + image_id = (known after apply)
2026-04-18 00:02:23.469036 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.469044 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-18 00:02:23.469052 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.469059 | orchestrator | + size = 80
2026-04-18 00:02:23.469067 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.469141 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.469149 | orchestrator | }
2026-04-18 00:02:23.469332 | orchestrator |
2026-04-18 00:02:23.469429 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-18 00:02:23.469439 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-18 00:02:23.469444 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.469449 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.469599 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.469635 | orchestrator | + image_id = (known after apply)
2026-04-18 00:02:23.469657 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.469686 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-18 00:02:23.469692 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.469739 | orchestrator | + size = 80
2026-04-18 00:02:23.469745 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.469750 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.469755 | orchestrator | }
2026-04-18 00:02:23.470156 | orchestrator |
2026-04-18 00:02:23.470248 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-18 00:02:23.470263 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-18 00:02:23.470269 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.470362 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.470367 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.470372 | orchestrator | + image_id = (known after apply)
2026-04-18 00:02:23.470400 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.470405 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-18 00:02:23.470410 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.470415 | orchestrator | + size = 80
2026-04-18 00:02:23.470420 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.470424 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.470429 | orchestrator | }
2026-04-18 00:02:23.470510 | orchestrator |
2026-04-18 00:02:23.470526 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-18 00:02:23.470533 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-18 00:02:23.470538 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.470543 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.470548 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.470553 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.470558 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-18 00:02:23.470564 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.470568 | orchestrator | + size = 20
2026-04-18 00:02:23.470573 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.470578 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.470583 | orchestrator | }
2026-04-18 00:02:23.470656 | orchestrator |
2026-04-18 00:02:23.470670 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-18 00:02:23.470676 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-18 00:02:23.470681 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.470686 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.470691 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.470696 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.470701 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-18 00:02:23.470705 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.470710 | orchestrator | + size = 20
2026-04-18 00:02:23.470715 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.470720 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.470725 | orchestrator | }
2026-04-18 00:02:23.470794 | orchestrator |
2026-04-18 00:02:23.470809 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-18 00:02:23.470815 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-18 00:02:23.470820 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.470825 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.470829 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.470844 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.470849 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-18 00:02:23.470854 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.470866 | orchestrator | + size = 20
2026-04-18 00:02:23.470871 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.470876 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.470881 | orchestrator | }
2026-04-18 00:02:23.470950 | orchestrator |
2026-04-18 00:02:23.470965 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-18 00:02:23.470971 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-18 00:02:23.470976 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.470981 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.470985 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.470995 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.471000 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-18 00:02:23.471005 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.471010 | orchestrator | + size = 20
2026-04-18 00:02:23.471015 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.471019 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.471024 | orchestrator | }
2026-04-18 00:02:23.471095 | orchestrator |
2026-04-18 00:02:23.471110 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-18 00:02:23.471116 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-18 00:02:23.471121 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.471125 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.471130 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.471135 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.471140 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-18 00:02:23.471144 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.471149 | orchestrator | + size = 20
2026-04-18 00:02:23.471154 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.471159 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.471242 | orchestrator | }
2026-04-18 00:02:23.471355 | orchestrator |
2026-04-18 00:02:23.471371 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-18 00:02:23.471378 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-18 00:02:23.471383 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.471387 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.471392 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.471397 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.471402 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-18 00:02:23.471407 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.471412 | orchestrator | + size = 20
2026-04-18 00:02:23.471417 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.471422 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.471426 | orchestrator | }
2026-04-18 00:02:23.471505 | orchestrator |
2026-04-18 00:02:23.471519 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-18 00:02:23.471525 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-18 00:02:23.471530 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.471535 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.471540 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.471545 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.471584 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-18 00:02:23.471590 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.471595 | orchestrator | + size = 20
2026-04-18 00:02:23.471600 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.471604 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.471669 | orchestrator | }
2026-04-18 00:02:23.471831 | orchestrator |
2026-04-18 00:02:23.471856 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-18 00:02:23.471886 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-18 00:02:23.471898 | orchestrator | + attachment = (known after apply)
2026-04-18 00:02:23.471903 | orchestrator | + availability_zone = "nova"
2026-04-18 00:02:23.471907 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.471912 | orchestrator | + metadata = (known after apply)
2026-04-18 00:02:23.471916 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-18 00:02:23.471921 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.471925 | orchestrator | + size = 20
2026-04-18 00:02:23.471930 | orchestrator | + volume_retype_policy = "never"
2026-04-18 00:02:23.471935 | orchestrator | + volume_type = "ssd"
2026-04-18 00:02:23.471939 | orchestrator | }
2026-04-18 00:02:23.472071 | orchestrator |
2026-04-18 00:02:23.472087 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-18 00:02:23.472092 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-18 00:02:23.472238 | orchestrator | + attachment = (known after apply) 2026-04-18 00:02:23.472244 | orchestrator | + availability_zone = "nova" 2026-04-18 00:02:23.472249 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.472281 | orchestrator | + metadata = (known after apply) 2026-04-18 00:02:23.472287 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-18 00:02:23.472292 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.472296 | orchestrator | + size = 20 2026-04-18 00:02:23.472301 | orchestrator | + volume_retype_policy = "never" 2026-04-18 00:02:23.472333 | orchestrator | + volume_type = "ssd" 2026-04-18 00:02:23.472338 | orchestrator | } 2026-04-18 00:02:23.472737 | orchestrator | 2026-04-18 00:02:23.472771 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-18 00:02:23.472817 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-18 00:02:23.472822 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-18 00:02:23.472827 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-18 00:02:23.472832 | orchestrator | + all_metadata = (known after apply) 2026-04-18 00:02:23.472837 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.472841 | orchestrator | + availability_zone = "nova" 2026-04-18 00:02:23.472846 | orchestrator | + config_drive = true 2026-04-18 00:02:23.472894 | orchestrator | + created = (known after apply) 2026-04-18 00:02:23.472901 | orchestrator | + flavor_id = (known after apply) 2026-04-18 00:02:23.472906 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-18 00:02:23.472910 | orchestrator | + force_delete = false 2026-04-18 00:02:23.472961 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-18 00:02:23.473025 | 
orchestrator | + id = (known after apply) 2026-04-18 00:02:23.473031 | orchestrator | + image_id = (known after apply) 2026-04-18 00:02:23.473083 | orchestrator | + image_name = (known after apply) 2026-04-18 00:02:23.473107 | orchestrator | + key_pair = "testbed" 2026-04-18 00:02:23.473113 | orchestrator | + name = "testbed-manager" 2026-04-18 00:02:23.473178 | orchestrator | + power_state = "active" 2026-04-18 00:02:23.473184 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.473188 | orchestrator | + security_groups = (known after apply) 2026-04-18 00:02:23.473206 | orchestrator | + stop_before_destroy = false 2026-04-18 00:02:23.473211 | orchestrator | + updated = (known after apply) 2026-04-18 00:02:23.473216 | orchestrator | + user_data = (sensitive value) 2026-04-18 00:02:23.473267 | orchestrator | 2026-04-18 00:02:23.473273 | orchestrator | + block_device { 2026-04-18 00:02:23.473278 | orchestrator | + boot_index = 0 2026-04-18 00:02:23.473282 | orchestrator | + delete_on_termination = false 2026-04-18 00:02:23.473287 | orchestrator | + destination_type = "volume" 2026-04-18 00:02:23.473291 | orchestrator | + multiattach = false 2026-04-18 00:02:23.473296 | orchestrator | + source_type = "volume" 2026-04-18 00:02:23.473300 | orchestrator | + uuid = (known after apply) 2026-04-18 00:02:23.473312 | orchestrator | } 2026-04-18 00:02:23.473341 | orchestrator | 2026-04-18 00:02:23.473347 | orchestrator | + network { 2026-04-18 00:02:23.473351 | orchestrator | + access_network = false 2026-04-18 00:02:23.473356 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-18 00:02:23.473361 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-18 00:02:23.473365 | orchestrator | + mac = (known after apply) 2026-04-18 00:02:23.473370 | orchestrator | + name = (known after apply) 2026-04-18 00:02:23.473374 | orchestrator | + port = (known after apply) 2026-04-18 00:02:23.473389 | orchestrator | + uuid = (known after apply) 2026-04-18 
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state        = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
+ dns_name = (known after apply) 2026-04-18 00:02:23.478189 | orchestrator | + fixed_ip = (known after apply) 2026-04-18 00:02:23.478193 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.478198 | orchestrator | + pool = "public" 2026-04-18 00:02:23.478203 | orchestrator | + port_id = (known after apply) 2026-04-18 00:02:23.478207 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.478211 | orchestrator | + subnet_id = (known after apply) 2026-04-18 00:02:23.478216 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.478220 | orchestrator | } 2026-04-18 00:02:23.478225 | orchestrator | 2026-04-18 00:02:23.478229 | orchestrator | # openstack_networking_network_v2.net_management will be created 2026-04-18 00:02:23.478239 | orchestrator | + resource "openstack_networking_network_v2" "net_management" { 2026-04-18 00:02:23.478244 | orchestrator | + admin_state_up = (known after apply) 2026-04-18 00:02:23.478248 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.478252 | orchestrator | + availability_zone_hints = [ 2026-04-18 00:02:23.478257 | orchestrator | + "nova", 2026-04-18 00:02:23.478262 | orchestrator | ] 2026-04-18 00:02:23.478266 | orchestrator | + dns_domain = (known after apply) 2026-04-18 00:02:23.478271 | orchestrator | + external = (known after apply) 2026-04-18 00:02:23.478275 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.478280 | orchestrator | + mtu = (known after apply) 2026-04-18 00:02:23.478284 | orchestrator | + name = "net-testbed-management" 2026-04-18 00:02:23.478292 | orchestrator | + port_security_enabled = (known after apply) 2026-04-18 00:02:23.478301 | orchestrator | + qos_policy_id = (known after apply) 2026-04-18 00:02:23.478305 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.478310 | orchestrator | + shared = (known after apply) 2026-04-18 00:02:23.478314 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.478319 | 
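The `manager_floating_ip` and `manager_floating_ip_association` blocks planned above correspond to a Terraform configuration roughly like the following. This is a reconstruction from the planned attribute values only (`pool = "public"` is taken from the plan); the cross-resource references are assumptions, since the actual testbed configuration is not shown in this log:

```hcl
# Hypothetical reconstruction of the manager floating IP resources from
# the plan output. Only pool = "public" is confirmed by the log; the
# floating_ip and port_id references are assumed.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```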
orchestrator | + transparent_vlan = (known after apply) 2026-04-18 00:02:23.478323 | orchestrator | 2026-04-18 00:02:23.478328 | orchestrator | + segments (known after apply) 2026-04-18 00:02:23.478332 | orchestrator | } 2026-04-18 00:02:23.478337 | orchestrator | 2026-04-18 00:02:23.478341 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-04-18 00:02:23.478346 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-04-18 00:02:23.478350 | orchestrator | + admin_state_up = (known after apply) 2026-04-18 00:02:23.478355 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-18 00:02:23.478359 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-18 00:02:23.478364 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.478368 | orchestrator | + device_id = (known after apply) 2026-04-18 00:02:23.478373 | orchestrator | + device_owner = (known after apply) 2026-04-18 00:02:23.478377 | orchestrator | + dns_assignment = (known after apply) 2026-04-18 00:02:23.478382 | orchestrator | + dns_name = (known after apply) 2026-04-18 00:02:23.478386 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.478390 | orchestrator | + mac_address = (known after apply) 2026-04-18 00:02:23.478395 | orchestrator | + network_id = (known after apply) 2026-04-18 00:02:23.478399 | orchestrator | + port_security_enabled = (known after apply) 2026-04-18 00:02:23.478404 | orchestrator | + qos_policy_id = (known after apply) 2026-04-18 00:02:23.478408 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.478413 | orchestrator | + security_group_ids = (known after apply) 2026-04-18 00:02:23.478417 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.478422 | orchestrator | 2026-04-18 00:02:23.478426 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478431 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-18 
00:02:23.478435 | orchestrator | } 2026-04-18 00:02:23.478440 | orchestrator | 2026-04-18 00:02:23.478444 | orchestrator | + binding (known after apply) 2026-04-18 00:02:23.478449 | orchestrator | 2026-04-18 00:02:23.478453 | orchestrator | + fixed_ip { 2026-04-18 00:02:23.478458 | orchestrator | + ip_address = "192.168.16.5" 2026-04-18 00:02:23.478462 | orchestrator | + subnet_id = (known after apply) 2026-04-18 00:02:23.478467 | orchestrator | } 2026-04-18 00:02:23.478471 | orchestrator | } 2026-04-18 00:02:23.478476 | orchestrator | 2026-04-18 00:02:23.478480 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-04-18 00:02:23.478485 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-18 00:02:23.478490 | orchestrator | + admin_state_up = (known after apply) 2026-04-18 00:02:23.478494 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-18 00:02:23.478498 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-18 00:02:23.478503 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.478507 | orchestrator | + device_id = (known after apply) 2026-04-18 00:02:23.478512 | orchestrator | + device_owner = (known after apply) 2026-04-18 00:02:23.478516 | orchestrator | + dns_assignment = (known after apply) 2026-04-18 00:02:23.478521 | orchestrator | + dns_name = (known after apply) 2026-04-18 00:02:23.478525 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.478529 | orchestrator | + mac_address = (known after apply) 2026-04-18 00:02:23.478534 | orchestrator | + network_id = (known after apply) 2026-04-18 00:02:23.478538 | orchestrator | + port_security_enabled = (known after apply) 2026-04-18 00:02:23.478543 | orchestrator | + qos_policy_id = (known after apply) 2026-04-18 00:02:23.478547 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.478555 | orchestrator | + security_group_ids = (known after apply) 2026-04-18 
00:02:23.478560 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.478565 | orchestrator | 2026-04-18 00:02:23.478569 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478574 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-18 00:02:23.478578 | orchestrator | } 2026-04-18 00:02:23.478583 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478587 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-18 00:02:23.478592 | orchestrator | } 2026-04-18 00:02:23.478596 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478601 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-18 00:02:23.478605 | orchestrator | } 2026-04-18 00:02:23.478610 | orchestrator | 2026-04-18 00:02:23.478614 | orchestrator | + binding (known after apply) 2026-04-18 00:02:23.478619 | orchestrator | 2026-04-18 00:02:23.478623 | orchestrator | + fixed_ip { 2026-04-18 00:02:23.478628 | orchestrator | + ip_address = "192.168.16.10" 2026-04-18 00:02:23.478632 | orchestrator | + subnet_id = (known after apply) 2026-04-18 00:02:23.478637 | orchestrator | } 2026-04-18 00:02:23.478642 | orchestrator | } 2026-04-18 00:02:23.478646 | orchestrator | 2026-04-18 00:02:23.478651 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-04-18 00:02:23.478655 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-18 00:02:23.478662 | orchestrator | + admin_state_up = (known after apply) 2026-04-18 00:02:23.478667 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-18 00:02:23.478672 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-18 00:02:23.478676 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.478681 | orchestrator | + device_id = (known after apply) 2026-04-18 00:02:23.478685 | orchestrator | + device_owner = (known after apply) 2026-04-18 00:02:23.478690 | orchestrator | + dns_assignment = (known after 
apply) 2026-04-18 00:02:23.478694 | orchestrator | + dns_name = (known after apply) 2026-04-18 00:02:23.478699 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.478703 | orchestrator | + mac_address = (known after apply) 2026-04-18 00:02:23.478708 | orchestrator | + network_id = (known after apply) 2026-04-18 00:02:23.478712 | orchestrator | + port_security_enabled = (known after apply) 2026-04-18 00:02:23.478717 | orchestrator | + qos_policy_id = (known after apply) 2026-04-18 00:02:23.478721 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.478726 | orchestrator | + security_group_ids = (known after apply) 2026-04-18 00:02:23.478730 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.478735 | orchestrator | 2026-04-18 00:02:23.478739 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478744 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-18 00:02:23.478749 | orchestrator | } 2026-04-18 00:02:23.478753 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478761 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-18 00:02:23.478766 | orchestrator | } 2026-04-18 00:02:23.478771 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478775 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-18 00:02:23.478780 | orchestrator | } 2026-04-18 00:02:23.478784 | orchestrator | 2026-04-18 00:02:23.478789 | orchestrator | + binding (known after apply) 2026-04-18 00:02:23.478793 | orchestrator | 2026-04-18 00:02:23.478798 | orchestrator | + fixed_ip { 2026-04-18 00:02:23.478802 | orchestrator | + ip_address = "192.168.16.11" 2026-04-18 00:02:23.478807 | orchestrator | + subnet_id = (known after apply) 2026-04-18 00:02:23.478811 | orchestrator | } 2026-04-18 00:02:23.478816 | orchestrator | } 2026-04-18 00:02:23.478820 | orchestrator | 2026-04-18 00:02:23.478825 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-04-18 
00:02:23.478830 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-18 00:02:23.478834 | orchestrator | + admin_state_up = (known after apply) 2026-04-18 00:02:23.478839 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-18 00:02:23.478843 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-18 00:02:23.478848 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.478856 | orchestrator | + device_id = (known after apply) 2026-04-18 00:02:23.478860 | orchestrator | + device_owner = (known after apply) 2026-04-18 00:02:23.478865 | orchestrator | + dns_assignment = (known after apply) 2026-04-18 00:02:23.478869 | orchestrator | + dns_name = (known after apply) 2026-04-18 00:02:23.478874 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.478878 | orchestrator | + mac_address = (known after apply) 2026-04-18 00:02:23.478883 | orchestrator | + network_id = (known after apply) 2026-04-18 00:02:23.478887 | orchestrator | + port_security_enabled = (known after apply) 2026-04-18 00:02:23.478892 | orchestrator | + qos_policy_id = (known after apply) 2026-04-18 00:02:23.478896 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.478901 | orchestrator | + security_group_ids = (known after apply) 2026-04-18 00:02:23.478905 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.478910 | orchestrator | 2026-04-18 00:02:23.478914 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478919 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-18 00:02:23.478923 | orchestrator | } 2026-04-18 00:02:23.478928 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478932 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-18 00:02:23.478937 | orchestrator | } 2026-04-18 00:02:23.478942 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.478946 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-18 00:02:23.478951 
| orchestrator | } 2026-04-18 00:02:23.478955 | orchestrator | 2026-04-18 00:02:23.478960 | orchestrator | + binding (known after apply) 2026-04-18 00:02:23.478964 | orchestrator | 2026-04-18 00:02:23.478969 | orchestrator | + fixed_ip { 2026-04-18 00:02:23.478973 | orchestrator | + ip_address = "192.168.16.12" 2026-04-18 00:02:23.478978 | orchestrator | + subnet_id = (known after apply) 2026-04-18 00:02:23.478982 | orchestrator | } 2026-04-18 00:02:23.478987 | orchestrator | } 2026-04-18 00:02:23.478991 | orchestrator | 2026-04-18 00:02:23.478996 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-04-18 00:02:23.479001 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-18 00:02:23.479005 | orchestrator | + admin_state_up = (known after apply) 2026-04-18 00:02:23.479010 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-18 00:02:23.479014 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-18 00:02:23.479019 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.479023 | orchestrator | + device_id = (known after apply) 2026-04-18 00:02:23.479028 | orchestrator | + device_owner = (known after apply) 2026-04-18 00:02:23.479032 | orchestrator | + dns_assignment = (known after apply) 2026-04-18 00:02:23.479037 | orchestrator | + dns_name = (known after apply) 2026-04-18 00:02:23.479041 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.479046 | orchestrator | + mac_address = (known after apply) 2026-04-18 00:02:23.479050 | orchestrator | + network_id = (known after apply) 2026-04-18 00:02:23.479055 | orchestrator | + port_security_enabled = (known after apply) 2026-04-18 00:02:23.479059 | orchestrator | + qos_policy_id = (known after apply) 2026-04-18 00:02:23.479064 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.479068 | orchestrator | + security_group_ids = (known after apply) 2026-04-18 00:02:23.479073 | 
orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.479077 | orchestrator | 2026-04-18 00:02:23.479082 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.479086 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-18 00:02:23.479091 | orchestrator | } 2026-04-18 00:02:23.479096 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.479100 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-18 00:02:23.479105 | orchestrator | } 2026-04-18 00:02:23.479109 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.479114 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-18 00:02:23.479118 | orchestrator | } 2026-04-18 00:02:23.479123 | orchestrator | 2026-04-18 00:02:23.479131 | orchestrator | + binding (known after apply) 2026-04-18 00:02:23.479135 | orchestrator | 2026-04-18 00:02:23.479140 | orchestrator | + fixed_ip { 2026-04-18 00:02:23.479144 | orchestrator | + ip_address = "192.168.16.13" 2026-04-18 00:02:23.479149 | orchestrator | + subnet_id = (known after apply) 2026-04-18 00:02:23.479154 | orchestrator | } 2026-04-18 00:02:23.479158 | orchestrator | } 2026-04-18 00:02:23.479163 | orchestrator | 2026-04-18 00:02:23.479181 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-04-18 00:02:23.479185 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-04-18 00:02:23.479190 | orchestrator | + admin_state_up = (known after apply) 2026-04-18 00:02:23.479194 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-18 00:02:23.479199 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-18 00:02:23.479204 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.479208 | orchestrator | + device_id = (known after apply) 2026-04-18 00:02:23.479212 | orchestrator | + device_owner = (known after apply) 2026-04-18 00:02:23.479217 | orchestrator | + dns_assignment = (known after apply) 2026-04-18 
00:02:23.479221 | orchestrator | + dns_name = (known after apply) 2026-04-18 00:02:23.479229 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.479233 | orchestrator | + mac_address = (known after apply) 2026-04-18 00:02:23.479238 | orchestrator | + network_id = (known after apply) 2026-04-18 00:02:23.479242 | orchestrator | + port_security_enabled = (known after apply) 2026-04-18 00:02:23.479247 | orchestrator | + qos_policy_id = (known after apply) 2026-04-18 00:02:23.479251 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.479256 | orchestrator | + security_group_ids = (known after apply) 2026-04-18 00:02:23.479260 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.479266 | orchestrator | 2026-04-18 00:02:23.479274 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.479284 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-18 00:02:23.479288 | orchestrator | } 2026-04-18 00:02:23.479293 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.479297 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-18 00:02:23.479302 | orchestrator | } 2026-04-18 00:02:23.479306 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.479311 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-18 00:02:23.479315 | orchestrator | } 2026-04-18 00:02:23.479320 | orchestrator | 2026-04-18 00:02:23.479324 | orchestrator | + binding (known after apply) 2026-04-18 00:02:23.479329 | orchestrator | 2026-04-18 00:02:23.479333 | orchestrator | + fixed_ip { 2026-04-18 00:02:23.479338 | orchestrator | + ip_address = "192.168.16.14" 2026-04-18 00:02:23.479343 | orchestrator | + subnet_id = (known after apply) 2026-04-18 00:02:23.479347 | orchestrator | } 2026-04-18 00:02:23.479352 | orchestrator | } 2026-04-18 00:02:23.479356 | orchestrator | 2026-04-18 00:02:23.479361 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-04-18 00:02:23.479365 | orchestrator | 
+ resource "openstack_networking_port_v2" "node_port_management" { 2026-04-18 00:02:23.479370 | orchestrator | + admin_state_up = (known after apply) 2026-04-18 00:02:23.479374 | orchestrator | + all_fixed_ips = (known after apply) 2026-04-18 00:02:23.479379 | orchestrator | + all_security_group_ids = (known after apply) 2026-04-18 00:02:23.479383 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.479388 | orchestrator | + device_id = (known after apply) 2026-04-18 00:02:23.479392 | orchestrator | + device_owner = (known after apply) 2026-04-18 00:02:23.479397 | orchestrator | + dns_assignment = (known after apply) 2026-04-18 00:02:23.479401 | orchestrator | + dns_name = (known after apply) 2026-04-18 00:02:23.479406 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.479410 | orchestrator | + mac_address = (known after apply) 2026-04-18 00:02:23.479415 | orchestrator | + network_id = (known after apply) 2026-04-18 00:02:23.479419 | orchestrator | + port_security_enabled = (known after apply) 2026-04-18 00:02:23.479424 | orchestrator | + qos_policy_id = (known after apply) 2026-04-18 00:02:23.479432 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.479436 | orchestrator | + security_group_ids = (known after apply) 2026-04-18 00:02:23.479441 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.479445 | orchestrator | 2026-04-18 00:02:23.479450 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.479454 | orchestrator | + ip_address = "192.168.16.254/32" 2026-04-18 00:02:23.479459 | orchestrator | } 2026-04-18 00:02:23.479463 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.479468 | orchestrator | + ip_address = "192.168.16.8/32" 2026-04-18 00:02:23.479472 | orchestrator | } 2026-04-18 00:02:23.479477 | orchestrator | + allowed_address_pairs { 2026-04-18 00:02:23.479481 | orchestrator | + ip_address = "192.168.16.9/32" 2026-04-18 00:02:23.479486 | orchestrator | } 2026-04-18 
00:02:23.479490 | orchestrator | 2026-04-18 00:02:23.479495 | orchestrator | + binding (known after apply) 2026-04-18 00:02:23.479499 | orchestrator | 2026-04-18 00:02:23.479504 | orchestrator | + fixed_ip { 2026-04-18 00:02:23.479508 | orchestrator | + ip_address = "192.168.16.15" 2026-04-18 00:02:23.479513 | orchestrator | + subnet_id = (known after apply) 2026-04-18 00:02:23.479517 | orchestrator | } 2026-04-18 00:02:23.479522 | orchestrator | } 2026-04-18 00:02:23.479526 | orchestrator | 2026-04-18 00:02:23.479531 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-04-18 00:02:23.479535 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-04-18 00:02:23.479540 | orchestrator | + force_destroy = false 2026-04-18 00:02:23.479544 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.479549 | orchestrator | + port_id = (known after apply) 2026-04-18 00:02:23.479554 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.479558 | orchestrator | + router_id = (known after apply) 2026-04-18 00:02:23.479563 | orchestrator | + subnet_id = (known after apply) 2026-04-18 00:02:23.479567 | orchestrator | } 2026-04-18 00:02:23.479572 | orchestrator | 2026-04-18 00:02:23.479576 | orchestrator | # openstack_networking_router_v2.router will be created 2026-04-18 00:02:23.479581 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-04-18 00:02:23.479585 | orchestrator | + admin_state_up = (known after apply) 2026-04-18 00:02:23.479589 | orchestrator | + all_tags = (known after apply) 2026-04-18 00:02:23.479594 | orchestrator | + availability_zone_hints = [ 2026-04-18 00:02:23.479598 | orchestrator | + "nova", 2026-04-18 00:02:23.479603 | orchestrator | ] 2026-04-18 00:02:23.479608 | orchestrator | + distributed = (known after apply) 2026-04-18 00:02:23.479612 | orchestrator | + enable_snat = (known after apply) 2026-04-18 00:02:23.479616 | 
orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-04-18 00:02:23.479621 | orchestrator | + external_qos_policy_id = (known after apply) 2026-04-18 00:02:23.479625 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.479630 | orchestrator | + name = "testbed" 2026-04-18 00:02:23.479634 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.479639 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.479644 | orchestrator | 2026-04-18 00:02:23.479648 | orchestrator | + external_fixed_ip (known after apply) 2026-04-18 00:02:23.479653 | orchestrator | } 2026-04-18 00:02:23.479657 | orchestrator | 2026-04-18 00:02:23.479662 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-04-18 00:02:23.479667 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-04-18 00:02:23.479671 | orchestrator | + description = "ssh" 2026-04-18 00:02:23.479676 | orchestrator | + direction = "ingress" 2026-04-18 00:02:23.479680 | orchestrator | + ethertype = "IPv4" 2026-04-18 00:02:23.479685 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.479689 | orchestrator | + port_range_max = 22 2026-04-18 00:02:23.479694 | orchestrator | + port_range_min = 22 2026-04-18 00:02:23.479698 | orchestrator | + protocol = "tcp" 2026-04-18 00:02:23.479703 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.479710 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-18 00:02:23.479715 | orchestrator | + remote_group_id = (known after apply) 2026-04-18 00:02:23.479719 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-18 00:02:23.479724 | orchestrator | + security_group_id = (known after apply) 2026-04-18 00:02:23.479728 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.479733 | orchestrator | } 2026-04-18 00:02:23.479737 | orchestrator | 2026-04-18 
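The `security_group_management_rule1` plan entry above ("ssh", TCP/22 ingress from anywhere) would be produced by a resource along these lines. All literal values are taken directly from the planned attributes; the `security_group_id` reference is an assumption, as the plan only shows it as `(known after apply)`:

```hcl
# Hypothetical reconstruction of the "ssh" ingress rule shown in the plan.
# Literal values match the planned attributes; the security_group_id
# reference is assumed, not confirmed by the log.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```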
00:02:23.479742 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-04-18 00:02:23.479746 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-04-18 00:02:23.479751 | orchestrator | + description = "wireguard" 2026-04-18 00:02:23.479755 | orchestrator | + direction = "ingress" 2026-04-18 00:02:23.479763 | orchestrator | + ethertype = "IPv4" 2026-04-18 00:02:23.479767 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.479772 | orchestrator | + port_range_max = 51820 2026-04-18 00:02:23.479776 | orchestrator | + port_range_min = 51820 2026-04-18 00:02:23.479781 | orchestrator | + protocol = "udp" 2026-04-18 00:02:23.479785 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.479790 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-18 00:02:23.479795 | orchestrator | + remote_group_id = (known after apply) 2026-04-18 00:02:23.479799 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-04-18 00:02:23.479804 | orchestrator | + security_group_id = (known after apply) 2026-04-18 00:02:23.479808 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.479813 | orchestrator | } 2026-04-18 00:02:23.479817 | orchestrator | 2026-04-18 00:02:23.479822 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-04-18 00:02:23.479826 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-04-18 00:02:23.479833 | orchestrator | + direction = "ingress" 2026-04-18 00:02:23.479838 | orchestrator | + ethertype = "IPv4" 2026-04-18 00:02:23.479842 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.479847 | orchestrator | + protocol = "tcp" 2026-04-18 00:02:23.479851 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.479856 | orchestrator | + remote_address_group_id = (known 
after apply) 2026-04-18 00:02:23.479860 | orchestrator | + remote_group_id = (known after apply) 2026-04-18 00:02:23.479865 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-04-18 00:02:23.479869 | orchestrator | + security_group_id = (known after apply) 2026-04-18 00:02:23.479874 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.479878 | orchestrator | } 2026-04-18 00:02:23.479883 | orchestrator | 2026-04-18 00:02:23.479888 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-04-18 00:02:23.479892 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-04-18 00:02:23.479897 | orchestrator | + direction = "ingress" 2026-04-18 00:02:23.479901 | orchestrator | + ethertype = "IPv4" 2026-04-18 00:02:23.479906 | orchestrator | + id = (known after apply) 2026-04-18 00:02:23.479910 | orchestrator | + protocol = "udp" 2026-04-18 00:02:23.479915 | orchestrator | + region = (known after apply) 2026-04-18 00:02:23.479919 | orchestrator | + remote_address_group_id = (known after apply) 2026-04-18 00:02:23.479924 | orchestrator | + remote_group_id = (known after apply) 2026-04-18 00:02:23.479928 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-04-18 00:02:23.479933 | orchestrator | + security_group_id = (known after apply) 2026-04-18 00:02:23.479937 | orchestrator | + tenant_id = (known after apply) 2026-04-18 00:02:23.479942 | orchestrator | } 2026-04-18 00:02:23.479947 | orchestrator | 2026-04-18 00:02:23.479951 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2026-04-18 00:02:23.479959 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2026-04-18 00:02:23.479964 | orchestrator | + direction = "ingress" 2026-04-18 00:02:23.479968 | orchestrator | + ethertype = "IPv4" 2026-04-18 00:02:23.479973 | orchestrator | + id = 
(known after apply)
2026-04-18 00:02:23.479977 | orchestrator | + protocol = "icmp"
2026-04-18 00:02:23.479982 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.479986 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-18 00:02:23.479991 | orchestrator | + remote_group_id = (known after apply)
2026-04-18 00:02:23.479995 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-18 00:02:23.480000 | orchestrator | + security_group_id = (known after apply)
2026-04-18 00:02:23.480004 | orchestrator | + tenant_id = (known after apply)
2026-04-18 00:02:23.480009 | orchestrator | }
2026-04-18 00:02:23.480013 | orchestrator |
2026-04-18 00:02:23.480018 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-04-18 00:02:23.480022 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-04-18 00:02:23.480027 | orchestrator | + direction = "ingress"
2026-04-18 00:02:23.480032 | orchestrator | + ethertype = "IPv4"
2026-04-18 00:02:23.480036 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.480041 | orchestrator | + protocol = "tcp"
2026-04-18 00:02:23.480045 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.480050 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-18 00:02:23.480054 | orchestrator | + remote_group_id = (known after apply)
2026-04-18 00:02:23.480059 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-18 00:02:23.480063 | orchestrator | + security_group_id = (known after apply)
2026-04-18 00:02:23.480068 | orchestrator | + tenant_id = (known after apply)
2026-04-18 00:02:23.480072 | orchestrator | }
2026-04-18 00:02:23.480077 | orchestrator |
2026-04-18 00:02:23.480081 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-04-18 00:02:23.480086 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-04-18 00:02:23.480090 | orchestrator | + direction = "ingress"
2026-04-18 00:02:23.480095 | orchestrator | + ethertype = "IPv4"
2026-04-18 00:02:23.480099 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.480104 | orchestrator | + protocol = "udp"
2026-04-18 00:02:23.480109 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.480113 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-18 00:02:23.480118 | orchestrator | + remote_group_id = (known after apply)
2026-04-18 00:02:23.480122 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-18 00:02:23.480126 | orchestrator | + security_group_id = (known after apply)
2026-04-18 00:02:23.480131 | orchestrator | + tenant_id = (known after apply)
2026-04-18 00:02:23.480136 | orchestrator | }
2026-04-18 00:02:23.480144 | orchestrator |
2026-04-18 00:02:23.480152 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-04-18 00:02:23.480160 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-04-18 00:02:23.480214 | orchestrator | + direction = "ingress"
2026-04-18 00:02:23.480220 | orchestrator | + ethertype = "IPv4"
2026-04-18 00:02:23.480224 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.480229 | orchestrator | + protocol = "icmp"
2026-04-18 00:02:23.480233 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.480238 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-18 00:02:23.480242 | orchestrator | + remote_group_id = (known after apply)
2026-04-18 00:02:23.480247 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-18 00:02:23.480251 | orchestrator | + security_group_id = (known after apply)
2026-04-18 00:02:23.480256 | orchestrator | + tenant_id = (known after apply)
2026-04-18 00:02:23.480264 | orchestrator | }
2026-04-18 00:02:23.480269 | orchestrator |
2026-04-18 00:02:23.480273 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-04-18 00:02:23.480278 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-04-18 00:02:23.480283 | orchestrator | + description = "vrrp"
2026-04-18 00:02:23.480287 | orchestrator | + direction = "ingress"
2026-04-18 00:02:23.480291 | orchestrator | + ethertype = "IPv4"
2026-04-18 00:02:23.480296 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.480301 | orchestrator | + protocol = "112"
2026-04-18 00:02:23.480305 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.480310 | orchestrator | + remote_address_group_id = (known after apply)
2026-04-18 00:02:23.480314 | orchestrator | + remote_group_id = (known after apply)
2026-04-18 00:02:23.480319 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-04-18 00:02:23.480323 | orchestrator | + security_group_id = (known after apply)
2026-04-18 00:02:23.480328 | orchestrator | + tenant_id = (known after apply)
2026-04-18 00:02:23.480332 | orchestrator | }
2026-04-18 00:02:23.480337 | orchestrator |
2026-04-18 00:02:23.480342 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-04-18 00:02:23.480346 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-04-18 00:02:23.480351 | orchestrator | + all_tags = (known after apply)
2026-04-18 00:02:23.480355 | orchestrator | + description = "management security group"
2026-04-18 00:02:23.480360 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.480364 | orchestrator | + name = "testbed-management"
2026-04-18 00:02:23.480369 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.480373 | orchestrator | + stateful = (known after apply)
2026-04-18 00:02:23.480378 | orchestrator | + tenant_id = (known after apply)
2026-04-18 00:02:23.480382 | orchestrator | }
2026-04-18 00:02:23.480387 | orchestrator |
2026-04-18 00:02:23.480391 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-04-18 00:02:23.480396 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-04-18 00:02:23.480400 | orchestrator | + all_tags = (known after apply)
2026-04-18 00:02:23.480405 | orchestrator | + description = "node security group"
2026-04-18 00:02:23.480409 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.480414 | orchestrator | + name = "testbed-node"
2026-04-18 00:02:23.480418 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.480423 | orchestrator | + stateful = (known after apply)
2026-04-18 00:02:23.480427 | orchestrator | + tenant_id = (known after apply)
2026-04-18 00:02:23.480432 | orchestrator | }
2026-04-18 00:02:23.480436 | orchestrator |
2026-04-18 00:02:23.480441 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-04-18 00:02:23.480446 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-04-18 00:02:23.480450 | orchestrator | + all_tags = (known after apply)
2026-04-18 00:02:23.480455 | orchestrator | + cidr = "192.168.16.0/20"
2026-04-18 00:02:23.480459 | orchestrator | + dns_nameservers = [
2026-04-18 00:02:23.480464 | orchestrator | + "8.8.8.8",
2026-04-18 00:02:23.480469 | orchestrator | + "9.9.9.9",
2026-04-18 00:02:23.480473 | orchestrator | ]
2026-04-18 00:02:23.480478 | orchestrator | + enable_dhcp = true
2026-04-18 00:02:23.480482 | orchestrator | + gateway_ip = (known after apply)
2026-04-18 00:02:23.480490 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.480495 | orchestrator | + ip_version = 4
2026-04-18 00:02:23.480500 | orchestrator | + ipv6_address_mode = (known after apply)
2026-04-18 00:02:23.480504 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-04-18 00:02:23.480509 | orchestrator | + name = "subnet-testbed-management"
2026-04-18 00:02:23.480513 | orchestrator | + network_id = (known after apply)
2026-04-18 00:02:23.480518 | orchestrator | + no_gateway = false
2026-04-18 00:02:23.480522 | orchestrator | + region = (known after apply)
2026-04-18 00:02:23.480527 | orchestrator | + service_types = (known after apply)
2026-04-18 00:02:23.480535 | orchestrator | + tenant_id = (known after apply)
2026-04-18 00:02:23.480540 | orchestrator |
2026-04-18 00:02:23.480544 | orchestrator | + allocation_pool {
2026-04-18 00:02:23.480549 | orchestrator | + end = "192.168.31.250"
2026-04-18 00:02:23.480553 | orchestrator | + start = "192.168.31.200"
2026-04-18 00:02:23.480558 | orchestrator | }
2026-04-18 00:02:23.480562 | orchestrator | }
2026-04-18 00:02:23.480567 | orchestrator |
2026-04-18 00:02:23.480571 | orchestrator | # terraform_data.image will be created
2026-04-18 00:02:23.480576 | orchestrator | + resource "terraform_data" "image" {
2026-04-18 00:02:23.480580 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.480585 | orchestrator | + input = "Ubuntu 24.04"
2026-04-18 00:02:23.480589 | orchestrator | + output = (known after apply)
2026-04-18 00:02:23.480594 | orchestrator | }
2026-04-18 00:02:23.480598 | orchestrator |
2026-04-18 00:02:23.480603 | orchestrator | # terraform_data.image_node will be created
2026-04-18 00:02:23.480607 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-18 00:02:23.480612 | orchestrator | + id = (known after apply)
2026-04-18 00:02:23.480616 | orchestrator | + input = "Ubuntu 24.04"
2026-04-18 00:02:23.480621 | orchestrator | + output = (known after apply)
2026-04-18 00:02:23.480625 | orchestrator | }
2026-04-18 00:02:23.480630 | orchestrator |
2026-04-18 00:02:23.480634 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-04-18 00:02:23.480639 | orchestrator |
2026-04-18 00:02:23.480643 | orchestrator | Changes to Outputs:
2026-04-18 00:02:23.480648 | orchestrator | + manager_address = (sensitive value)
2026-04-18 00:02:23.480652 | orchestrator | + private_key = (sensitive value)
2026-04-18 00:02:23.715488 | orchestrator | terraform_data.image: Creating...
2026-04-18 00:02:23.715554 | orchestrator | terraform_data.image_node: Creating...
2026-04-18 00:02:23.715563 | orchestrator | terraform_data.image: Creation complete after 0s [id=271c642c-54c2-18cb-9329-12eea53f0afe]
2026-04-18 00:02:23.715571 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=38e4e73d-e20f-6920-e08a-a7b0b4aa5cb9]
2026-04-18 00:02:23.754093 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-18 00:02:23.766010 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-18 00:02:23.775430 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-18 00:02:23.775489 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-18 00:02:23.775507 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-18 00:02:23.775513 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-18 00:02:23.775520 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-18 00:02:23.775789 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-18 00:02:23.778944 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-18 00:02:23.779480 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-18 00:02:24.248403 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-18 00:02:24.252863 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-18 00:02:24.380706 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-04-18 00:02:24.385509 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-18 00:02:24.880347 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=00f1013f-6599-4da7-a848-01855ba9c2e9]
2026-04-18 00:02:24.885341 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-18 00:02:24.932432 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-18 00:02:24.936578 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-18 00:02:27.413964 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=599ac768-9444-46f8-8891-4fc4025f5c6d]
2026-04-18 00:02:27.436069 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-18 00:02:27.445457 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=d804f2e65b2fe07a325409d4a1983925d97a9a29]
2026-04-18 00:02:27.451610 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-18 00:02:27.458902 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=1ec2efee70ff505451be55696ee792062e24f24e]
2026-04-18 00:02:27.464735 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=7f3bae2c-a706-4f5f-b58d-9951f9e0de3e]
2026-04-18 00:02:27.470138 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-18 00:02:27.471780 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-18 00:02:27.474274 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=7641f05d-52e5-4bbf-9f1e-20500225c31b]
2026-04-18 00:02:27.480229 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=ca3776c9-6de1-440c-b162-bababc8c77c0]
2026-04-18 00:02:27.482395 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-18 00:02:27.484303 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-18 00:02:27.510663 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=b8f980bd-a337-4427-8cb0-8c7c2531a389]
2026-04-18 00:02:27.516401 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=60a8bfc5-b9d0-4eee-9755-9876332cb447]
2026-04-18 00:02:27.518887 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-18 00:02:27.523749 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-18 00:02:27.539965 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=73edcc3e-3dbf-4300-a5bd-7c80b242e92a]
2026-04-18 00:02:27.546817 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-18 00:02:27.575903 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=5963e3c2-4ca5-403e-a695-14ba4ecb8231]
2026-04-18 00:02:27.625969 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=3edcf3c9-33ad-4391-9ec0-9b04bbd0d527]
2026-04-18 00:02:28.289062 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=5f0d91dd-4cbd-4575-b615-6da2039ab98d]
2026-04-18 00:02:28.927551 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=d7e3b100-f27c-4d1d-8b76-77c9daf65683]
2026-04-18 00:02:28.934394 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-18 00:02:30.881629 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=9213db1e-4236-43b7-9dd1-3c155c1e850d]
2026-04-18 00:02:30.891697 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=1f6e4506-6b1e-430c-8147-62b3e11670d5]
2026-04-18 00:02:30.912201 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=4a56528a-54c0-4b72-af2f-9bfd74ddf29e]
2026-04-18 00:02:30.950935 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=d62b803e-00a4-4d44-9f80-4fab0bc8c0f9]
2026-04-18 00:02:30.969561 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=f72b8c50-fdce-43ff-a6a0-37b5c0548b80]
2026-04-18 00:02:31.389957 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee]
2026-04-18 00:02:33.301600 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=039b1e6f-5911-4ee1-9752-97fc705fefef]
2026-04-18 00:02:33.307870 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-18 00:02:33.309540 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-18 00:02:33.310999 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-18 00:02:33.506042 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=2471ad0c-b518-4b24-a8a2-e70f51c52d88]
2026-04-18 00:02:33.516307 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-18 00:02:33.516368 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-18 00:02:33.516374 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-18 00:02:33.516394 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-18 00:02:33.517534 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-18 00:02:33.518223 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-18 00:02:33.859531 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=8eecc08e-d8db-47c5-80a2-72f95103cf4b]
2026-04-18 00:02:33.866911 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-18 00:02:33.867011 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-18 00:02:33.872356 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-18 00:02:34.313762 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=b630787c-e830-4913-a354-0e60ad669b8e]
2026-04-18 00:02:34.320138 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-18 00:02:34.417790 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=e49fefa4-63c7-4b96-bd4f-c6b6afd75780]
2026-04-18 00:02:34.427031 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-18 00:02:34.776948 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=07ab65ef-56a2-436e-8b4a-36aacc39965f]
2026-04-18 00:02:34.788858 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-18 00:02:35.211067 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=c76681e6-13dc-4526-8618-f1e12ac83fb6]
2026-04-18 00:02:35.222875 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-18 00:02:35.238606 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=94680650-d50e-4753-a6bd-abba0c6adbc4]
2026-04-18 00:02:35.245416 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-18 00:02:35.422274 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=3765cba6-c7c3-4350-a1c1-dcab8cbe20cb]
2026-04-18 00:02:36.148054 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-18 00:02:36.148118 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=bce17210-f5bc-42d3-a7cf-67eacc27275b]
2026-04-18 00:02:36.148131 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-18 00:02:36.148143 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=45d67c3e-4ebb-4b89-a807-af036e70eb48]
2026-04-18 00:02:36.148153 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=d95991d0-9da1-4cd7-a5b0-ccd42896502f]
2026-04-18 00:02:36.148189 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=7dfff089-e079-4c6c-996e-212510737365]
2026-04-18 00:02:36.148210 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=61e1a65e-ac87-4455-bcaf-eef5b95221f3]
2026-04-18 00:02:36.148220 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=ce57ed99-51d7-423a-9b18-7e9749db4495]
2026-04-18 00:02:36.322417 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=f656cb47-9ea5-4f5f-b768-803c7d9cdc9c]
2026-04-18 00:02:36.491698 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=a3d272cf-88b3-4c04-87ae-3d0bdff02f0a]
2026-04-18 00:02:36.573251 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=06918f51-651a-4ed1-a5b9-928415adc4bb]
2026-04-18 00:02:37.593021 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 3s [id=034a5580-51ad-4e77-85b3-d4b9d47a2e6a]
2026-04-18 00:02:39.309737 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 6s [id=d06d7b2b-816c-41e8-abe1-363cb2e79229]
2026-04-18 00:02:39.325959 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-18 00:02:39.343132 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-18 00:02:39.344833 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-18 00:02:39.348490 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-18 00:02:39.354289 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-18 00:02:39.360213 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-18 00:02:39.361127 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-18 00:02:42.557547 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 4s [id=c714ba24-be24-4385-8ded-f10085b24a19]
2026-04-18 00:02:42.569751 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-18 00:02:42.570609 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-18 00:02:42.572011 | orchestrator | local_file.inventory: Creating...
2026-04-18 00:02:42.574065 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d0da1cd956f1eaf432393fde59e22ee0b3dc9565]
2026-04-18 00:02:42.577771 | orchestrator | local_file.inventory: Creation complete after 0s [id=51615470c870edff475ff34e6a8e490598eb6218]
2026-04-18 00:02:44.402564 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=c714ba24-be24-4385-8ded-f10085b24a19]
2026-04-18 00:02:49.350803 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-18 00:02:49.350929 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-18 00:02:49.353026 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-18 00:02:49.360394 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-18 00:02:49.361615 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-18 00:02:49.364831 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-18 00:02:59.359127 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-18 00:02:59.359256 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-18 00:02:59.359275 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-18 00:02:59.360461 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-18 00:02:59.362931 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-18 00:02:59.365270 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-18 00:03:09.367661 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-18 00:03:09.367826 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-18 00:03:09.367842 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-18 00:03:09.367866 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-04-18 00:03:09.367880 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-04-18 00:03:09.367894 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-18 00:03:10.214957 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=42f02d68-5a85-41ab-9068-8474a90f71d2]
2026-04-18 00:03:19.374371 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-04-18 00:03:19.374480 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-04-18 00:03:19.374495 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-04-18 00:03:19.374506 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-04-18 00:03:19.374549 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-04-18 00:03:20.209514 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=71895afd-ba56-4315-8d15-4e93394349c5]
2026-04-18 00:03:20.508089 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 42s [id=995b7dae-c672-4bde-be10-6419e2818511]
2026-04-18 00:03:29.374845 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-04-18 00:03:29.375014 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-04-18 00:03:29.375035 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-04-18 00:03:30.415627 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=a6f5de1f-eeac-49f0-9744-cfcdb0314d27]
2026-04-18 00:03:31.257216 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 52s [id=31183e9a-72b6-459c-9534-19039d9b0df3]
2026-04-18 00:03:39.382899 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m0s elapsed]
2026-04-18 00:03:49.390415 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m10s elapsed]
2026-04-18 00:03:59.398970 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [1m20s elapsed]
2026-04-18 00:04:00.607637 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 1m22s [id=ee07a1f7-8137-4031-a1c7-dc364aac2a81]
2026-04-18 00:04:00.620969 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-18 00:04:00.627614 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2814390113072418980]
2026-04-18 00:04:00.644218 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-18 00:04:00.646189 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-18 00:04:00.647012 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-18 00:04:00.653014 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-18 00:04:00.654636 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-18 00:04:00.661277 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-18 00:04:00.674174 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-18 00:04:00.674292 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-18 00:04:00.674314 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-18 00:04:00.701790 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-18 00:04:04.091617 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=31183e9a-72b6-459c-9534-19039d9b0df3/5963e3c2-4ca5-403e-a695-14ba4ecb8231]
2026-04-18 00:04:04.091711 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=a6f5de1f-eeac-49f0-9744-cfcdb0314d27/7f3bae2c-a706-4f5f-b58d-9951f9e0de3e]
2026-04-18 00:04:04.132915 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=71895afd-ba56-4315-8d15-4e93394349c5/599ac768-9444-46f8-8891-4fc4025f5c6d]
2026-04-18 00:04:04.169429 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=a6f5de1f-eeac-49f0-9744-cfcdb0314d27/7641f05d-52e5-4bbf-9f1e-20500225c31b]
2026-04-18 00:04:04.186498 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=31183e9a-72b6-459c-9534-19039d9b0df3/b8f980bd-a337-4427-8cb0-8c7c2531a389]
2026-04-18 00:04:04.186586 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=71895afd-ba56-4315-8d15-4e93394349c5/3edcf3c9-33ad-4391-9ec0-9b04bbd0d527]
2026-04-18 00:04:10.271345 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=31183e9a-72b6-459c-9534-19039d9b0df3/73edcc3e-3dbf-4300-a5bd-7c80b242e92a]
2026-04-18 00:04:10.288441 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=a6f5de1f-eeac-49f0-9744-cfcdb0314d27/60a8bfc5-b9d0-4eee-9755-9876332cb447]
2026-04-18 00:04:10.320248 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=71895afd-ba56-4315-8d15-4e93394349c5/ca3776c9-6de1-440c-b162-bababc8c77c0]
2026-04-18 00:04:10.660364 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-18 00:04:20.661073 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-18 00:04:21.039878 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=970749f6-596b-4369-8551-ea2f89319dbc]
2026-04-18 00:04:21.055331 | orchestrator |
2026-04-18 00:04:21.055409 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-18 00:04:21.055416 | orchestrator |
2026-04-18 00:04:21.055422 | orchestrator | Outputs:
2026-04-18 00:04:21.055429 | orchestrator |
2026-04-18 00:04:21.055436 | orchestrator | manager_address =
2026-04-18 00:04:21.055444 | orchestrator | private_key =
2026-04-18 00:04:21.223155 | orchestrator | ok: Runtime: 0:02:06.878263
2026-04-18 00:04:21.258292 |
2026-04-18 00:04:21.258484 | TASK [Fetch manager address]
2026-04-18 00:04:21.744777 | orchestrator | ok
2026-04-18 00:04:21.755985 |
2026-04-18 00:04:21.756128 | TASK [Set manager_host address]
2026-04-18 00:04:21.836645 | orchestrator | ok
2026-04-18 00:04:21.846000 |
2026-04-18 00:04:21.846221 | LOOP [Update ansible collections]
2026-04-18 00:04:23.169183 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-18 00:04:23.169752 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-18 00:04:23.169865 | orchestrator | Starting galaxy collection install process
2026-04-18 00:04:23.170174 | orchestrator | Process install dependency map
2026-04-18 00:04:23.170278 | orchestrator | Starting collection install process
2026-04-18 00:04:23.170344 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-04-18 00:04:23.170411 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-04-18 00:04:23.170491 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-18 00:04:23.170667 | orchestrator | ok: Item: commons Runtime: 0:00:00.917316
2026-04-18 00:04:24.343326 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-18 00:04:24.343572 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-18 00:04:24.343651 | orchestrator | Starting galaxy collection install process
2026-04-18 00:04:24.343695 | orchestrator | Process install dependency map
2026-04-18 00:04:24.343734 | orchestrator | Starting collection install process
2026-04-18 00:04:24.343770 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-04-18 00:04:24.343804 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-04-18 00:04:24.343838 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-18 00:04:24.343914 | orchestrator | ok: Item: services Runtime: 0:00:00.893062
2026-04-18 00:04:24.367147 |
2026-04-18 00:04:24.367311 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-18 00:04:34.979069 | orchestrator | ok
2026-04-18 00:04:34.989985 |
2026-04-18 00:04:34.990117 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-18 00:05:35.071765 | orchestrator | ok
2026-04-18 00:05:35.082586 |
2026-04-18 00:05:35.082710 | TASK [Fetch manager ssh hostkey]
2026-04-18 00:05:36.661209 | orchestrator | Output suppressed because no_log was given
2026-04-18 00:05:36.669812 |
2026-04-18 00:05:36.669971 | TASK [Get ssh keypair from terraform environment]
2026-04-18 00:05:37.208039 | orchestrator | ok: Runtime: 0:00:00.008272
2026-04-18 00:05:37.223203 |
2026-04-18 00:05:37.223363 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-18 00:05:37.281504 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-18 00:05:37.295441 |
2026-04-18 00:05:37.297531 | TASK [Run manager part 0]
2026-04-18 00:05:38.497736 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-18 00:05:38.569150 | orchestrator |
2026-04-18 00:05:38.569229 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-18 00:05:38.569242 | orchestrator |
2026-04-18 00:05:38.569263 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-18 00:05:40.431623 | orchestrator | ok: [testbed-manager]
2026-04-18 00:05:40.431675 | orchestrator |
2026-04-18 00:05:40.431701 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-18 00:05:40.431711 | orchestrator |
2026-04-18 00:05:40.431719 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-18 00:05:42.258586 | orchestrator | ok: [testbed-manager]
2026-04-18 00:05:42.258637 | orchestrator |
2026-04-18 00:05:42.258648 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-18 00:05:42.864944 | orchestrator | ok: [testbed-manager]
2026-04-18 00:05:42.864994 | orchestrator |
2026-04-18 00:05:42.865005 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-18 00:05:42.909894 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:05:42.909936 | orchestrator |
2026-04-18 00:05:42.909949 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-18 00:05:42.942464 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:05:42.942497 | orchestrator |
2026-04-18 00:05:42.942541 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-18 00:05:42.976933 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:05:42.977248 | orchestrator | 2026-04-18 00:05:42.977309 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-18 00:05:43.653737 | orchestrator | changed: [testbed-manager] 2026-04-18 00:05:43.653769 | orchestrator | 2026-04-18 00:05:43.653775 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-18 00:08:51.742768 | orchestrator | changed: [testbed-manager] 2026-04-18 00:08:51.742860 | orchestrator | 2026-04-18 00:08:51.742875 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-18 00:10:19.030219 | orchestrator | changed: [testbed-manager] 2026-04-18 00:10:19.030312 | orchestrator | 2026-04-18 00:10:19.030372 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-18 00:10:43.400124 | orchestrator | changed: [testbed-manager] 2026-04-18 00:10:43.400168 | orchestrator | 2026-04-18 00:10:43.400178 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-18 00:10:51.819182 | orchestrator | changed: [testbed-manager] 2026-04-18 00:10:51.819272 | orchestrator | 2026-04-18 00:10:51.819288 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-18 00:10:51.869943 | orchestrator | ok: [testbed-manager] 2026-04-18 00:10:51.870065 | orchestrator | 2026-04-18 00:10:51.870084 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-18 00:10:52.654290 | orchestrator | ok: [testbed-manager] 2026-04-18 00:10:52.654332 | orchestrator | 2026-04-18 00:10:52.654341 | orchestrator | TASK [Create venv directory] 
*************************************************** 2026-04-18 00:10:53.405318 | orchestrator | changed: [testbed-manager] 2026-04-18 00:10:53.405403 | orchestrator | 2026-04-18 00:10:53.405421 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-18 00:10:59.979122 | orchestrator | changed: [testbed-manager] 2026-04-18 00:10:59.979217 | orchestrator | 2026-04-18 00:10:59.979232 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-18 00:11:05.781795 | orchestrator | changed: [testbed-manager] 2026-04-18 00:11:05.781865 | orchestrator | 2026-04-18 00:11:05.781877 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-18 00:11:08.352759 | orchestrator | changed: [testbed-manager] 2026-04-18 00:11:08.352830 | orchestrator | 2026-04-18 00:11:08.352846 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-18 00:11:10.055905 | orchestrator | changed: [testbed-manager] 2026-04-18 00:11:10.055996 | orchestrator | 2026-04-18 00:11:10.056013 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-18 00:11:11.156422 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-18 00:11:11.156550 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-18 00:11:11.156567 | orchestrator | 2026-04-18 00:11:11.156577 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-18 00:11:11.196196 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-18 00:11:11.196261 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-18 00:11:11.196271 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2026-04-18 00:11:11.196280 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-18 00:11:14.347672 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-18 00:11:14.347713 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-18 00:11:14.347720 | orchestrator | 2026-04-18 00:11:14.347728 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-18 00:11:14.917557 | orchestrator | changed: [testbed-manager] 2026-04-18 00:11:14.917647 | orchestrator | 2026-04-18 00:11:14.917662 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-18 00:14:35.358460 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-18 00:14:35.358493 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-18 00:14:35.358498 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-18 00:14:35.358502 | orchestrator | 2026-04-18 00:14:35.358508 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-18 00:14:37.542286 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-18 00:14:37.542323 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-18 00:14:37.542328 | orchestrator | 2026-04-18 00:14:37.542334 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-18 00:14:37.542339 | orchestrator | 2026-04-18 00:14:37.542343 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-18 00:14:38.828309 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:38.828342 | orchestrator | 2026-04-18 00:14:38.828348 | orchestrator | TASK [osism.commons.operator : Gather variables 
for each operating system] ***** 2026-04-18 00:14:38.878341 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:38.878381 | orchestrator | 2026-04-18 00:14:38.878389 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-18 00:14:38.934925 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:38.934965 | orchestrator | 2026-04-18 00:14:38.934972 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-18 00:14:39.669523 | orchestrator | changed: [testbed-manager] 2026-04-18 00:14:39.669563 | orchestrator | 2026-04-18 00:14:39.669572 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-18 00:14:40.358796 | orchestrator | changed: [testbed-manager] 2026-04-18 00:14:40.358846 | orchestrator | 2026-04-18 00:14:40.358855 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-18 00:14:41.649642 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-18 00:14:41.649685 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-18 00:14:41.649693 | orchestrator | 2026-04-18 00:14:41.649701 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-18 00:14:42.949794 | orchestrator | changed: [testbed-manager] 2026-04-18 00:14:42.949834 | orchestrator | 2026-04-18 00:14:42.949841 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-18 00:14:44.649449 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-18 00:14:44.649544 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-18 00:14:44.649579 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-18 00:14:44.649592 | orchestrator | 2026-04-18 00:14:44.649639 | orchestrator | TASK [osism.commons.operator : 
Set custom environment variables in .bashrc configuration file] *** 2026-04-18 00:14:44.705112 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:14:44.705172 | orchestrator | 2026-04-18 00:14:44.705181 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-18 00:14:44.784192 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:14:44.784246 | orchestrator | 2026-04-18 00:14:44.784252 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-18 00:14:45.325587 | orchestrator | changed: [testbed-manager] 2026-04-18 00:14:45.325682 | orchestrator | 2026-04-18 00:14:45.325700 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-18 00:14:45.394659 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:14:45.394716 | orchestrator | 2026-04-18 00:14:45.394723 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-18 00:14:46.245574 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-18 00:14:46.245664 | orchestrator | changed: [testbed-manager] 2026-04-18 00:14:46.245677 | orchestrator | 2026-04-18 00:14:46.245686 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-18 00:14:46.285374 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:14:46.285519 | orchestrator | 2026-04-18 00:14:46.285530 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-18 00:14:46.317197 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:14:46.317249 | orchestrator | 2026-04-18 00:14:46.317262 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-18 00:14:46.349168 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:14:46.349205 | orchestrator | 2026-04-18 00:14:46.349213 | 
orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-18 00:14:46.417552 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:14:46.417592 | orchestrator | 2026-04-18 00:14:46.417599 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-18 00:14:47.099328 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:47.099364 | orchestrator | 2026-04-18 00:14:47.099370 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-18 00:14:47.099375 | orchestrator | 2026-04-18 00:14:47.099381 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-18 00:14:48.416672 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:48.416708 | orchestrator | 2026-04-18 00:14:48.416713 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-18 00:14:49.352257 | orchestrator | changed: [testbed-manager] 2026-04-18 00:14:49.352337 | orchestrator | 2026-04-18 00:14:49.352349 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:14:49.352359 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-18 00:14:49.352368 | orchestrator | 2026-04-18 00:14:49.677558 | orchestrator | ok: Runtime: 0:09:11.776790 2026-04-18 00:14:49.695192 | 2026-04-18 00:14:49.695382 | TASK [Point out that the log in on the manager is now possible] 2026-04-18 00:14:49.743075 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-18 00:14:49.753814 | 2026-04-18 00:14:49.753936 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-18 00:14:49.801212 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. 
There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-18 00:14:49.811663 | 2026-04-18 00:14:49.811809 | TASK [Run manager part 1 + 2] 2026-04-18 00:14:50.657113 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-18 00:14:50.718156 | orchestrator | 2026-04-18 00:14:50.718253 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-18 00:14:50.718273 | orchestrator | 2026-04-18 00:14:50.718301 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-18 00:14:53.499774 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:53.500114 | orchestrator | 2026-04-18 00:14:53.500169 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-18 00:14:53.541593 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:14:53.541650 | orchestrator | 2026-04-18 00:14:53.541660 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-18 00:14:53.581007 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:53.581088 | orchestrator | 2026-04-18 00:14:53.581106 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-18 00:14:53.634373 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:53.634461 | orchestrator | 2026-04-18 00:14:53.634476 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-18 00:14:53.711848 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:53.711906 | orchestrator | 2026-04-18 00:14:53.711914 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-18 00:14:53.773158 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:53.773232 | orchestrator | 2026-04-18 00:14:53.773247 | orchestrator | TASK 
[osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-18 00:14:53.828949 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-18 00:14:53.829027 | orchestrator | 2026-04-18 00:14:53.829042 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-18 00:14:54.533065 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:54.533209 | orchestrator | 2026-04-18 00:14:54.533242 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-18 00:14:54.585654 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:14:54.585717 | orchestrator | 2026-04-18 00:14:54.585727 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-18 00:14:55.945954 | orchestrator | changed: [testbed-manager] 2026-04-18 00:14:55.946082 | orchestrator | 2026-04-18 00:14:55.946103 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-18 00:14:56.510917 | orchestrator | ok: [testbed-manager] 2026-04-18 00:14:56.511029 | orchestrator | 2026-04-18 00:14:56.511057 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-18 00:14:57.615742 | orchestrator | changed: [testbed-manager] 2026-04-18 00:14:57.615834 | orchestrator | 2026-04-18 00:14:57.615853 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-18 00:15:11.374234 | orchestrator | changed: [testbed-manager] 2026-04-18 00:15:11.374314 | orchestrator | 2026-04-18 00:15:11.374331 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-18 00:15:11.997818 | orchestrator | ok: [testbed-manager] 2026-04-18 00:15:11.997922 | orchestrator | 2026-04-18 
00:15:11.997951 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-18 00:15:12.080728 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:15:12.080834 | orchestrator | 2026-04-18 00:15:12.080852 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-18 00:15:13.068946 | orchestrator | changed: [testbed-manager] 2026-04-18 00:15:13.069064 | orchestrator | 2026-04-18 00:15:13.069082 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-18 00:15:14.070202 | orchestrator | changed: [testbed-manager] 2026-04-18 00:15:14.070301 | orchestrator | 2026-04-18 00:15:14.070319 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-18 00:15:14.638007 | orchestrator | changed: [testbed-manager] 2026-04-18 00:15:14.638124 | orchestrator | 2026-04-18 00:15:14.638147 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-18 00:15:14.692944 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-18 00:15:14.693072 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-18 00:15:14.693089 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-18 00:15:14.693102 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-18 00:15:16.761045 | orchestrator | changed: [testbed-manager] 2026-04-18 00:15:16.761143 | orchestrator | 2026-04-18 00:15:16.761159 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-18 00:15:25.429827 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-18 00:15:25.430107 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-18 00:15:25.430129 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-18 00:15:25.430138 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-18 00:15:25.430152 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-18 00:15:25.430160 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-18 00:15:25.430168 | orchestrator | 2026-04-18 00:15:25.430177 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-18 00:15:26.468824 | orchestrator | changed: [testbed-manager] 2026-04-18 00:15:26.468914 | orchestrator | 2026-04-18 00:15:26.468930 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-18 00:15:29.435829 | orchestrator | changed: [testbed-manager] 2026-04-18 00:15:29.435930 | orchestrator | 2026-04-18 00:15:29.435947 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-18 00:15:29.481240 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:15:29.481308 | orchestrator | 2026-04-18 00:15:29.481317 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-18 00:17:03.797751 | orchestrator | changed: [testbed-manager] 2026-04-18 00:17:03.797901 | orchestrator | 2026-04-18 00:17:03.797922 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-18 00:17:04.799266 | orchestrator | ok: [testbed-manager] 2026-04-18 00:17:04.799305 | 
orchestrator | 2026-04-18 00:17:04.799315 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:17:04.799322 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-18 00:17:04.799328 | orchestrator | 2026-04-18 00:17:04.956432 | orchestrator | ok: Runtime: 0:02:14.789023 2026-04-18 00:17:04.973185 | 2026-04-18 00:17:04.973398 | TASK [Reboot manager] 2026-04-18 00:17:06.514778 | orchestrator | ok: Runtime: 0:00:00.998586 2026-04-18 00:17:06.532613 | 2026-04-18 00:17:06.532796 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-18 00:17:20.277177 | orchestrator | ok 2026-04-18 00:17:20.287770 | 2026-04-18 00:17:20.287896 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-18 00:18:20.328922 | orchestrator | ok 2026-04-18 00:18:20.338989 | 2026-04-18 00:18:20.339116 | TASK [Deploy manager + bootstrap nodes] 2026-04-18 00:18:22.758816 | orchestrator | 2026-04-18 00:18:22.759013 | orchestrator | # DEPLOY MANAGER 2026-04-18 00:18:22.759038 | orchestrator | 2026-04-18 00:18:22.759054 | orchestrator | + set -e 2026-04-18 00:18:22.759071 | orchestrator | + echo 2026-04-18 00:18:22.759100 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-18 00:18:22.759135 | orchestrator | + echo 2026-04-18 00:18:22.759200 | orchestrator | + cat /opt/manager-vars.sh 2026-04-18 00:18:22.762102 | orchestrator | export NUMBER_OF_NODES=6 2026-04-18 00:18:22.762147 | orchestrator | 2026-04-18 00:18:22.762168 | orchestrator | export CEPH_VERSION=reef 2026-04-18 00:18:22.762186 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-18 00:18:22.762199 | orchestrator | export MANAGER_VERSION=10.0.0 2026-04-18 00:18:22.762223 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-18 00:18:22.762234 | orchestrator | 2026-04-18 00:18:22.762252 | orchestrator | export ARA=false 2026-04-18 00:18:22.762264 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-18 00:18:22.762281 | orchestrator | export TEMPEST=true 2026-04-18 00:18:22.762293 | orchestrator | export IS_ZUUL=true 2026-04-18 00:18:22.762304 | orchestrator | 2026-04-18 00:18:22.762322 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.97 2026-04-18 00:18:22.762334 | orchestrator | export EXTERNAL_API=false 2026-04-18 00:18:22.762387 | orchestrator | 2026-04-18 00:18:22.762400 | orchestrator | export IMAGE_USER=ubuntu 2026-04-18 00:18:22.762414 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-18 00:18:22.762425 | orchestrator | 2026-04-18 00:18:22.762436 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-18 00:18:22.762455 | orchestrator | 2026-04-18 00:18:22.762466 | orchestrator | + echo 2026-04-18 00:18:22.762484 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-18 00:18:22.762977 | orchestrator | ++ export INTERACTIVE=false 2026-04-18 00:18:22.763004 | orchestrator | ++ INTERACTIVE=false 2026-04-18 00:18:22.763024 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-18 00:18:22.763043 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-18 00:18:22.763196 | orchestrator | + source /opt/manager-vars.sh 2026-04-18 00:18:22.763222 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-18 00:18:22.763242 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-18 00:18:22.763270 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-18 00:18:22.763287 | orchestrator | ++ CEPH_VERSION=reef 2026-04-18 00:18:22.763303 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-18 00:18:22.763315 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-18 00:18:22.763326 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-18 00:18:22.763337 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-18 00:18:22.763377 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-18 00:18:22.763397 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-18 00:18:22.763409 | orchestrator | ++ export 
ARA=false 2026-04-18 00:18:22.763420 | orchestrator | ++ ARA=false 2026-04-18 00:18:22.763431 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-18 00:18:22.763442 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-18 00:18:22.763453 | orchestrator | ++ export TEMPEST=true 2026-04-18 00:18:22.763464 | orchestrator | ++ TEMPEST=true 2026-04-18 00:18:22.763475 | orchestrator | ++ export IS_ZUUL=true 2026-04-18 00:18:22.763486 | orchestrator | ++ IS_ZUUL=true 2026-04-18 00:18:22.763497 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.97 2026-04-18 00:18:22.763508 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.97 2026-04-18 00:18:22.763518 | orchestrator | ++ export EXTERNAL_API=false 2026-04-18 00:18:22.763529 | orchestrator | ++ EXTERNAL_API=false 2026-04-18 00:18:22.763540 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-18 00:18:22.763551 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-18 00:18:22.763562 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-18 00:18:22.763573 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-18 00:18:22.763588 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-18 00:18:22.763599 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-18 00:18:22.763611 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-18 00:18:22.815237 | orchestrator | + docker version 2026-04-18 00:18:22.915533 | orchestrator | Client: Docker Engine - Community 2026-04-18 00:18:22.915632 | orchestrator | Version: 27.5.1 2026-04-18 00:18:22.915647 | orchestrator | API version: 1.47 2026-04-18 00:18:22.915662 | orchestrator | Go version: go1.22.11 2026-04-18 00:18:22.915673 | orchestrator | Git commit: 9f9e405 2026-04-18 00:18:22.915684 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-18 00:18:22.915696 | orchestrator | OS/Arch: linux/amd64 2026-04-18 00:18:22.915707 | orchestrator | Context: default 2026-04-18 00:18:22.915718 | orchestrator | 2026-04-18 00:18:22.915729 | 
orchestrator | Server: Docker Engine - Community 2026-04-18 00:18:22.915740 | orchestrator | Engine: 2026-04-18 00:18:22.915751 | orchestrator | Version: 27.5.1 2026-04-18 00:18:22.915763 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-18 00:18:22.915803 | orchestrator | Go version: go1.22.11 2026-04-18 00:18:22.915815 | orchestrator | Git commit: 4c9b3b0 2026-04-18 00:18:22.915826 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-18 00:18:22.915836 | orchestrator | OS/Arch: linux/amd64 2026-04-18 00:18:22.915847 | orchestrator | Experimental: false 2026-04-18 00:18:22.915858 | orchestrator | containerd: 2026-04-18 00:18:22.915868 | orchestrator | Version: v2.2.3 2026-04-18 00:18:22.915880 | orchestrator | GitCommit: 77c84241c7cbdd9b4eca2591793e3d4f4317c590 2026-04-18 00:18:22.915891 | orchestrator | runc: 2026-04-18 00:18:22.915902 | orchestrator | Version: 1.3.5 2026-04-18 00:18:22.915913 | orchestrator | GitCommit: v1.3.5-0-g488fc13e 2026-04-18 00:18:22.915924 | orchestrator | docker-init: 2026-04-18 00:18:22.915935 | orchestrator | Version: 0.19.0 2026-04-18 00:18:22.915946 | orchestrator | GitCommit: de40ad0 2026-04-18 00:18:22.917828 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-18 00:18:22.926395 | orchestrator | + set -e 2026-04-18 00:18:22.926437 | orchestrator | + source /opt/manager-vars.sh 2026-04-18 00:18:22.926454 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-18 00:18:22.926480 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-18 00:18:22.926491 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-18 00:18:22.926503 | orchestrator | ++ CEPH_VERSION=reef 2026-04-18 00:18:22.926514 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-18 00:18:22.926534 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-18 00:18:22.926545 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-18 00:18:22.926557 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-18 00:18:22.926574 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-18 00:18:22.926593 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-18 00:18:22.926604 | orchestrator | ++ export ARA=false 2026-04-18 00:18:22.926622 | orchestrator | ++ ARA=false 2026-04-18 00:18:22.926637 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-18 00:18:22.926649 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-18 00:18:22.926659 | orchestrator | ++ export TEMPEST=true 2026-04-18 00:18:22.926677 | orchestrator | ++ TEMPEST=true 2026-04-18 00:18:22.926688 | orchestrator | ++ export IS_ZUUL=true 2026-04-18 00:18:22.926699 | orchestrator | ++ IS_ZUUL=true 2026-04-18 00:18:22.926720 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.97 2026-04-18 00:18:22.926738 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.97 2026-04-18 00:18:22.926749 | orchestrator | ++ export EXTERNAL_API=false 2026-04-18 00:18:22.926760 | orchestrator | ++ EXTERNAL_API=false 2026-04-18 00:18:22.926771 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-18 00:18:22.926781 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-18 00:18:22.926792 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-18 00:18:22.926802 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-18 00:18:22.926814 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-18 00:18:22.926825 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-18 00:18:22.926836 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-18 00:18:22.926846 | orchestrator | ++ export INTERACTIVE=false 2026-04-18 00:18:22.926857 | orchestrator | ++ INTERACTIVE=false 2026-04-18 00:18:22.926867 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-18 00:18:22.926883 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-18 00:18:22.927207 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-18 00:18:22.927225 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0 2026-04-18 00:18:22.933447 | orchestrator | + set -e 
2026-04-18 00:18:22.933511 | orchestrator | + VERSION=10.0.0
2026-04-18 00:18:22.933528 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-18 00:18:22.940092 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-18 00:18:22.940151 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-18 00:18:22.943878 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-18 00:18:22.947017 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-18 00:18:22.954187 | orchestrator | /opt/configuration ~
2026-04-18 00:18:22.954260 | orchestrator | + set -e
2026-04-18 00:18:22.954275 | orchestrator | + pushd /opt/configuration
2026-04-18 00:18:22.954287 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-18 00:18:22.956916 | orchestrator | + source /opt/venv/bin/activate
2026-04-18 00:18:22.957976 | orchestrator | ++ deactivate nondestructive
2026-04-18 00:18:22.957995 | orchestrator | ++ '[' -n '' ']'
2026-04-18 00:18:22.958010 | orchestrator | ++ '[' -n '' ']'
2026-04-18 00:18:22.958091 | orchestrator | ++ hash -r
2026-04-18 00:18:22.958109 | orchestrator | ++ '[' -n '' ']'
2026-04-18 00:18:22.958121 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-18 00:18:22.958132 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-18 00:18:22.958143 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-18 00:18:22.958574 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-18 00:18:22.958678 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-18 00:18:22.958693 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-18 00:18:22.958705 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-18 00:18:22.958717 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-18 00:18:22.958729 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-18 00:18:22.958740 | orchestrator | ++ export PATH
2026-04-18 00:18:22.958754 | orchestrator | ++ '[' -n '' ']'
2026-04-18 00:18:22.958765 | orchestrator | ++ '[' -z '' ']'
2026-04-18 00:18:22.958776 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-18 00:18:22.958787 | orchestrator | ++ PS1='(venv) '
2026-04-18 00:18:22.958798 | orchestrator | ++ export PS1
2026-04-18 00:18:22.958819 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-18 00:18:22.958830 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-18 00:18:22.958841 | orchestrator | ++ hash -r
2026-04-18 00:18:22.958852 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-18 00:18:23.795473 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-18 00:18:23.796292 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-18 00:18:23.797675 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-18 00:18:23.798958 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-18 00:18:23.799986 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.1)
2026-04-18 00:18:23.809392 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-18 00:18:23.810559 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-18 00:18:23.811513 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-18 00:18:23.812781 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-18 00:18:23.835716 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-18 00:18:23.837069 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-18 00:18:23.838491 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-18 00:18:23.839640 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-18 00:18:23.843379 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-18 00:18:24.017143 | orchestrator | ++ which gilt
2026-04-18 00:18:24.020701 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-18 00:18:24.020744 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-18 00:18:24.211633 | orchestrator | osism.cfg-generics:
2026-04-18 00:18:24.351526 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-18 00:18:24.351639 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-18 00:18:24.352060 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-18 00:18:24.352096 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-18 00:18:25.144663 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-18 00:18:25.154772 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-18 00:18:25.479846 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-18 00:18:25.511623 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-18 00:18:25.511673 | orchestrator | + deactivate
2026-04-18 00:18:25.511687 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-18 00:18:25.511698 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-18 00:18:25.511708 | orchestrator | + export PATH
2026-04-18 00:18:25.511718 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-18 00:18:25.511729 | orchestrator | + '[' -n '' ']'
2026-04-18 00:18:25.511742 | orchestrator | + hash -r
2026-04-18 00:18:25.511759 | orchestrator | ~
2026-04-18 00:18:25.511769 | orchestrator | + '[' -n '' ']'
2026-04-18 00:18:25.511779 | orchestrator | + unset VIRTUAL_ENV
2026-04-18 00:18:25.511789 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-18 00:18:25.511799 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-18 00:18:25.511809 | orchestrator | + unset -f deactivate
2026-04-18 00:18:25.511819 | orchestrator | + popd
2026-04-18 00:18:25.512979 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-18 00:18:25.513002 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-04-18 00:18:25.513912 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-18 00:18:25.560621 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-18 00:18:25.560710 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-04-18 00:18:25.560723 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-18 00:18:25.561548 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-18 00:18:25.633786 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-18 00:18:25.633862 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-18 00:18:25.639271 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-18 00:18:25.643207 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-04-18 00:18:25.724391 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-18 00:18:25.724490 | orchestrator | + source /opt/venv/bin/activate
2026-04-18 00:18:25.724503 | orchestrator | ++ deactivate nondestructive
2026-04-18 00:18:25.724515 | orchestrator | ++ '[' -n '' ']'
2026-04-18 00:18:25.724526 | orchestrator | ++ '[' -n '' ']'
2026-04-18 00:18:25.724538 | orchestrator | ++ hash -r
2026-04-18 00:18:25.724549 | orchestrator | ++ '[' -n '' ']'
2026-04-18 00:18:25.724560 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-18 00:18:25.724571 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-18 00:18:25.724582 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-18 00:18:25.724595 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-18 00:18:25.724606 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-18 00:18:25.724637 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-18 00:18:25.724648 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-18 00:18:25.724660 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-18 00:18:25.724672 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-18 00:18:25.724694 | orchestrator | ++ export PATH
2026-04-18 00:18:25.724706 | orchestrator | ++ '[' -n '' ']'
2026-04-18 00:18:25.724717 | orchestrator | ++ '[' -z '' ']'
2026-04-18 00:18:25.724727 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-18 00:18:25.724738 | orchestrator | ++ PS1='(venv) '
2026-04-18 00:18:25.724749 | orchestrator | ++ export PS1
2026-04-18 00:18:25.724760 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-18 00:18:25.724771 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-18 00:18:25.724781 | orchestrator | ++ hash -r
2026-04-18 00:18:25.724793 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-04-18 00:18:26.693216 | orchestrator |
2026-04-18 00:18:26.693439 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-04-18 00:18:26.693475 | orchestrator |
2026-04-18 00:18:26.693495 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-18 00:18:27.179884 | orchestrator | ok: [testbed-manager]
2026-04-18 00:18:27.179999 | orchestrator |
2026-04-18 00:18:27.180043 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-18 00:18:28.016130 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:28.016243 | orchestrator |
2026-04-18 00:18:28.016261 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-04-18 00:18:28.016274 | orchestrator |
2026-04-18 00:18:28.016290 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-18 00:18:30.066761 | orchestrator | ok: [testbed-manager]
2026-04-18 00:18:30.066874 | orchestrator |
2026-04-18 00:18:30.066892 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-04-18 00:18:30.118837 | orchestrator | ok: [testbed-manager]
2026-04-18 00:18:30.118944 | orchestrator |
2026-04-18 00:18:30.118961 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-04-18 00:18:30.526158 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:30.526236 | orchestrator |
2026-04-18 00:18:30.526246 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-04-18 00:18:30.563571 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:18:30.563684 | orchestrator |
2026-04-18 00:18:30.563703 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-18 00:18:30.864093 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:30.864185 | orchestrator |
2026-04-18 00:18:30.864199 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-04-18 00:18:31.159627 | orchestrator | ok: [testbed-manager]
2026-04-18 00:18:31.159724 | orchestrator |
2026-04-18 00:18:31.159740 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-04-18 00:18:31.261319 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:18:31.261457 | orchestrator |
2026-04-18 00:18:31.261474 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-04-18 00:18:31.261487 | orchestrator |
2026-04-18 00:18:31.261498 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-18 00:18:32.884831 | orchestrator | ok: [testbed-manager]
2026-04-18 00:18:32.884954 | orchestrator |
2026-04-18 00:18:32.884982 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-04-18 00:18:32.971221 | orchestrator | included: osism.services.traefik for testbed-manager
2026-04-18 00:18:32.971313 | orchestrator |
2026-04-18 00:18:32.971325 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-04-18 00:18:33.020088 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-04-18 00:18:33.020202 | orchestrator |
2026-04-18 00:18:33.020231 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-04-18 00:18:33.968184 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-04-18 00:18:33.968286 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-04-18 00:18:33.968302 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-04-18 00:18:33.968314 | orchestrator |
2026-04-18 00:18:33.968326 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-04-18 00:18:35.533429 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-04-18 00:18:35.533549 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-04-18 00:18:35.533566 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-04-18 00:18:35.533582 | orchestrator |
2026-04-18 00:18:35.533635 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-04-18 00:18:36.087868 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-18 00:18:36.087989 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:36.088008 | orchestrator |
2026-04-18 00:18:36.088020 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-04-18 00:18:36.661216 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-18 00:18:36.661325 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:36.661387 | orchestrator |
2026-04-18 00:18:36.661404 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-04-18 00:18:36.705984 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:18:36.706128 | orchestrator |
2026-04-18 00:18:36.706145 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-04-18 00:18:37.051253 | orchestrator | ok: [testbed-manager]
2026-04-18 00:18:37.051395 | orchestrator |
2026-04-18 00:18:37.051412 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-04-18 00:18:37.120965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-04-18 00:18:37.121073 | orchestrator |
2026-04-18 00:18:37.121088 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-04-18 00:18:38.200658 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:38.200761 | orchestrator |
2026-04-18 00:18:38.200777 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-04-18 00:18:38.952379 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:38.952481 | orchestrator |
2026-04-18 00:18:38.952498 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-04-18 00:18:48.727027 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:48.727142 | orchestrator |
2026-04-18 00:18:48.727160 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-04-18 00:18:48.775707 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:18:48.775797 | orchestrator |
2026-04-18 00:18:48.775813 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-04-18 00:18:48.775825 | orchestrator |
2026-04-18 00:18:48.775837 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-18 00:18:50.605576 | orchestrator | ok: [testbed-manager]
2026-04-18 00:18:50.605672 | orchestrator |
2026-04-18 00:18:50.605690 | orchestrator | TASK [Apply manager role] ******************************************************
2026-04-18 00:18:50.705654 | orchestrator | included: osism.services.manager for testbed-manager
2026-04-18 00:18:50.705748 | orchestrator |
2026-04-18 00:18:50.705764 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-18 00:18:50.771915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-18 00:18:50.772007 | orchestrator |
2026-04-18 00:18:50.772022 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-18 00:18:53.053048 | orchestrator | ok: [testbed-manager]
2026-04-18 00:18:53.053135 | orchestrator |
2026-04-18 00:18:53.053148 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-18 00:18:53.108445 | orchestrator | ok: [testbed-manager]
2026-04-18 00:18:53.108538 | orchestrator |
2026-04-18 00:18:53.108554 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-18 00:18:53.235435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-18 00:18:53.235549 | orchestrator |
2026-04-18 00:18:53.235572 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-18 00:18:55.865236 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-04-18 00:18:55.865335 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-04-18 00:18:55.865389 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-18 00:18:55.865401 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-04-18 00:18:55.865412 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-18 00:18:55.865421 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-18 00:18:55.865429 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-18 00:18:55.865438 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-04-18 00:18:55.865447 | orchestrator |
2026-04-18 00:18:55.865456 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-18 00:18:56.482337 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:56.482489 | orchestrator |
2026-04-18 00:18:56.482507 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-18 00:18:57.070179 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:57.070275 | orchestrator |
2026-04-18 00:18:57.070298 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-18 00:18:57.145133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-18 00:18:57.145250 | orchestrator |
2026-04-18 00:18:57.145276 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-18 00:18:58.281694 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-04-18 00:18:58.281795 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-04-18 00:18:58.281811 | orchestrator |
2026-04-18 00:18:58.281823 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-18 00:18:58.888191 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:58.888320 | orchestrator |
2026-04-18 00:18:58.888376 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-18 00:18:58.940856 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:18:58.940954 | orchestrator |
2026-04-18 00:18:58.940969 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-18 00:18:59.033461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-18 00:18:59.033531 | orchestrator |
2026-04-18 00:18:59.033539 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-18 00:18:59.610242 | orchestrator | changed: [testbed-manager]
2026-04-18 00:18:59.610366 | orchestrator |
2026-04-18 00:18:59.610384 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-18 00:18:59.667666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-18 00:18:59.667769 | orchestrator |
2026-04-18 00:18:59.667786 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-18 00:19:00.884979 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-18 00:19:00.885081 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-18 00:19:00.885100 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:00.885114 | orchestrator |
2026-04-18 00:19:00.885126 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-18 00:19:01.474500 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:01.474621 | orchestrator |
2026-04-18 00:19:01.474646 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-18 00:19:01.524083 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:19:01.524166 | orchestrator |
2026-04-18 00:19:01.524182 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-18 00:19:01.606527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-18 00:19:01.606623 | orchestrator |
2026-04-18 00:19:01.606640 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-18 00:19:02.085984 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:02.086181 | orchestrator |
2026-04-18 00:19:02.086210 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-18 00:19:02.450562 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:02.450658 | orchestrator |
2026-04-18 00:19:02.450674 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-18 00:19:03.486181 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-04-18 00:19:03.486315 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-04-18 00:19:03.486399 | orchestrator |
2026-04-18 00:19:03.486416 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-18 00:19:04.059247 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:04.059414 | orchestrator |
2026-04-18 00:19:04.059440 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-18 00:19:04.384090 | orchestrator | ok: [testbed-manager]
2026-04-18 00:19:04.384197 | orchestrator |
2026-04-18 00:19:04.384215 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-18 00:19:04.693189 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:04.693275 | orchestrator |
2026-04-18 00:19:04.693287 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-18 00:19:04.733722 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:19:04.733871 | orchestrator |
2026-04-18 00:19:04.733890 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-18 00:19:04.795554 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-18 00:19:04.795649 | orchestrator |
2026-04-18 00:19:04.795664 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-18 00:19:04.828281 | orchestrator | ok: [testbed-manager]
2026-04-18 00:19:04.828422 | orchestrator |
2026-04-18 00:19:04.828438 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-18 00:19:06.543905 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-04-18 00:19:06.544033 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-04-18 00:19:06.544051 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-04-18 00:19:06.544064 | orchestrator |
2026-04-18 00:19:06.544076 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-04-18 00:19:07.160065 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:07.160167 | orchestrator |
2026-04-18 00:19:07.160184 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-04-18 00:19:07.805797 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:07.805900 | orchestrator |
2026-04-18 00:19:07.805916 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-04-18 00:19:08.451251 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:08.451410 | orchestrator |
2026-04-18 00:19:08.451431 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-04-18 00:19:08.523037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-04-18 00:19:08.523128 | orchestrator |
2026-04-18 00:19:08.523143 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-04-18 00:19:08.560676 | orchestrator | ok: [testbed-manager]
2026-04-18 00:19:08.560762 | orchestrator |
2026-04-18 00:19:08.560777 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-04-18 00:19:09.179044 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-04-18 00:19:09.179193 | orchestrator |
2026-04-18 00:19:09.180009 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-04-18 00:19:09.250565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-04-18 00:19:09.250657 | orchestrator |
2026-04-18 00:19:09.250672 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-04-18 00:19:09.888503 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:09.888592 | orchestrator |
2026-04-18 00:19:09.888603 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-04-18 00:19:10.437972 | orchestrator | ok: [testbed-manager]
2026-04-18 00:19:10.438136 | orchestrator |
2026-04-18 00:19:10.438155 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-04-18 00:19:10.490660 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:19:10.490776 | orchestrator |
2026-04-18 00:19:10.490799 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-04-18 00:19:10.545975 | orchestrator | ok: [testbed-manager]
2026-04-18 00:19:10.546117 | orchestrator |
2026-04-18 00:19:10.546133 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-04-18 00:19:11.243821 | orchestrator | changed: [testbed-manager]
2026-04-18 00:19:11.243948 | orchestrator |
2026-04-18 00:19:11.243965 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-04-18 00:20:19.264776 | orchestrator | changed: [testbed-manager]
2026-04-18 00:20:19.264869 | orchestrator |
2026-04-18 00:20:19.264882 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-04-18 00:20:20.183758 | orchestrator | ok: [testbed-manager]
2026-04-18 00:20:20.183862 | orchestrator |
2026-04-18 00:20:20.183878 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-04-18 00:20:20.218379 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:20:20.218506 | orchestrator |
2026-04-18 00:20:20.218572 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-04-18 00:20:22.518999 | orchestrator | changed: [testbed-manager]
2026-04-18 00:20:22.519102 | orchestrator |
2026-04-18 00:20:22.519119 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-04-18 00:20:22.589754 | orchestrator | ok: [testbed-manager]
2026-04-18 00:20:22.589848 | orchestrator |
2026-04-18 00:20:22.589863 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-18 00:20:22.589876 | orchestrator |
2026-04-18 00:20:22.589888 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-04-18 00:20:22.756657 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:20:22.756742 | orchestrator |
2026-04-18 00:20:22.756776 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-04-18 00:21:22.815883 | orchestrator | Pausing for 60 seconds
2026-04-18 00:21:22.815998 | orchestrator | changed: [testbed-manager]
2026-04-18 00:21:22.816015 | orchestrator |
2026-04-18 00:21:22.816028 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-04-18 00:21:25.346844 | orchestrator | changed: [testbed-manager]
2026-04-18 00:21:25.346947 | orchestrator |
2026-04-18 00:21:25.346964 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-04-18 00:22:06.824312 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-04-18 00:22:06.824488 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-04-18 00:22:06.824509 | orchestrator | changed: [testbed-manager]
2026-04-18 00:22:06.824524 | orchestrator |
2026-04-18 00:22:06.824536 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-18 00:22:12.287285 | orchestrator | changed: [testbed-manager]
2026-04-18 00:22:12.287452 | orchestrator |
2026-04-18 00:22:12.287470 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-18 00:22:12.367211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-18 00:22:12.367299 | orchestrator |
2026-04-18 00:22:12.367313 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-18 00:22:12.367326 | orchestrator |
2026-04-18 00:22:12.367337 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-18 00:22:12.407420 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:22:12.407506 | orchestrator |
2026-04-18 00:22:12.407524 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-18 00:22:12.470163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-18 00:22:12.470277 | orchestrator |
2026-04-18 00:22:12.470300 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-18 00:22:13.221001 | orchestrator | changed: [testbed-manager]
2026-04-18 00:22:13.221147 | orchestrator |
2026-04-18 00:22:13.221178 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-18 00:22:16.356342 | orchestrator | ok: [testbed-manager]
2026-04-18 00:22:16.356500 | orchestrator |
2026-04-18 00:22:16.356517 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-18 00:22:16.426322 | orchestrator | ok: [testbed-manager] => {
2026-04-18 00:22:16.426436 | orchestrator |     "version_check_result.stdout_lines": [
2026-04-18 00:22:16.426451 | orchestrator |         "=== OSISM Container Version Check ===",
2026-04-18 00:22:16.426463 | orchestrator |         "Checking running containers against expected versions...",
2026-04-18 00:22:16.426475 | orchestrator |         "",
2026-04-18 00:22:16.426487 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-18 00:22:16.426498 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-18 00:22:16.426510 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.426522 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-18 00:22:16.426533 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.426545 | orchestrator |         "",
2026-04-18 00:22:16.426584 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-18 00:22:16.426597 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-18 00:22:16.426609 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.426620 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-18 00:22:16.426631 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.426642 | orchestrator |         "",
2026-04-18 00:22:16.426654 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-18 00:22:16.426668 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-18 00:22:16.426680 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.426691 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-18 00:22:16.426702 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.426713 | orchestrator |         "",
2026-04-18 00:22:16.426724 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-18 00:22:16.426736 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-18 00:22:16.426747 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.426758 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-18 00:22:16.426769 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.426780 | orchestrator |         "",
2026-04-18 00:22:16.426791 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-18 00:22:16.426802 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-18 00:22:16.426813 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.426824 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-18 00:22:16.426835 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.426846 | orchestrator |         "",
2026-04-18 00:22:16.426857 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-04-18 00:22:16.426870 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-18 00:22:16.426882 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.426895 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-18 00:22:16.426907 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.426919 | orchestrator |         "",
2026-04-18 00:22:16.426932 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-04-18 00:22:16.426944 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-18 00:22:16.426957 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.426970 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-18 00:22:16.426984 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.426996 | orchestrator |         "",
2026-04-18 00:22:16.427008 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-04-18 00:22:16.427021 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-18 00:22:16.427033 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.427046 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-18 00:22:16.427058 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.427071 | orchestrator |         "",
2026-04-18 00:22:16.427084 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-04-18 00:22:16.427097 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-18 00:22:16.427109 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.427121 | orchestrator |         "  Running: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-18 00:22:16.427133 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.427145 | orchestrator |         "",
2026-04-18 00:22:16.427158 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-04-18 00:22:16.427170 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-18 00:22:16.427183 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.427196 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-18 00:22:16.427208 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.427219 | orchestrator |         "",
2026-04-18 00:22:16.427238 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-04-18 00:22:16.427249 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-18 00:22:16.427260 | orchestrator |         "  Enabled: true",
2026-04-18 00:22:16.427271 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-18 00:22:16.427281 | orchestrator |         "  Status: ✅ MATCH",
2026-04-18 00:22:16.427292 | orchestrator |         "",
2026-04-18 00:22:16.427303 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-04-18 00:22:16.427314 |
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-18 00:22:16.427325 | orchestrator | " Enabled: true", 2026-04-18 00:22:16.427336 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-18 00:22:16.427346 | orchestrator | " Status: ✅ MATCH", 2026-04-18 00:22:16.427358 | orchestrator | "", 2026-04-18 00:22:16.427399 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-18 00:22:16.427419 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-18 00:22:16.427431 | orchestrator | " Enabled: true", 2026-04-18 00:22:16.427442 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-18 00:22:16.427453 | orchestrator | " Status: ✅ MATCH", 2026-04-18 00:22:16.427464 | orchestrator | "", 2026-04-18 00:22:16.427475 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-18 00:22:16.427486 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-18 00:22:16.427497 | orchestrator | " Enabled: true", 2026-04-18 00:22:16.427508 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-18 00:22:16.427536 | orchestrator | " Status: ✅ MATCH", 2026-04-18 00:22:16.427547 | orchestrator | "", 2026-04-18 00:22:16.427558 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-18 00:22:16.427569 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-18 00:22:16.427580 | orchestrator | " Enabled: true", 2026-04-18 00:22:16.427591 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-18 00:22:16.427602 | orchestrator | " Status: ✅ MATCH", 2026-04-18 00:22:16.427613 | orchestrator | "", 2026-04-18 00:22:16.427624 | orchestrator | "=== Summary ===", 2026-04-18 00:22:16.427635 | orchestrator | "Errors (version mismatches): 0", 2026-04-18 00:22:16.427646 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-04-18 00:22:16.427657 | orchestrator | "", 2026-04-18 00:22:16.427668 | orchestrator | "✅ All running containers match expected versions!" 2026-04-18 00:22:16.427679 | orchestrator | ] 2026-04-18 00:22:16.427690 | orchestrator | } 2026-04-18 00:22:16.427701 | orchestrator | 2026-04-18 00:22:16.427712 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-18 00:22:16.483448 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:22:16.483549 | orchestrator | 2026-04-18 00:22:16.483564 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:22:16.483577 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-18 00:22:16.483589 | orchestrator | 2026-04-18 00:22:16.589914 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-18 00:22:16.590013 | orchestrator | + deactivate 2026-04-18 00:22:16.590111 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-18 00:22:16.590134 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-18 00:22:16.590153 | orchestrator | + export PATH 2026-04-18 00:22:16.590172 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-18 00:22:16.590193 | orchestrator | + '[' -n '' ']' 2026-04-18 00:22:16.590214 | orchestrator | + hash -r 2026-04-18 00:22:16.590233 | orchestrator | + '[' -n '' ']' 2026-04-18 00:22:16.590253 | orchestrator | + unset VIRTUAL_ENV 2026-04-18 00:22:16.590273 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-18 00:22:16.590293 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-18 00:22:16.590312 | orchestrator | + unset -f deactivate 2026-04-18 00:22:16.590333 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-18 00:22:16.597157 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-18 00:22:16.597231 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-18 00:22:16.597253 | orchestrator | + local max_attempts=60 2026-04-18 00:22:16.597274 | orchestrator | + local name=ceph-ansible 2026-04-18 00:22:16.597294 | orchestrator | + local attempt_num=1 2026-04-18 00:22:16.597898 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:22:16.627490 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:22:16.627577 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-18 00:22:16.627593 | orchestrator | + local max_attempts=60 2026-04-18 00:22:16.627606 | orchestrator | + local name=kolla-ansible 2026-04-18 00:22:16.627617 | orchestrator | + local attempt_num=1 2026-04-18 00:22:16.628011 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-18 00:22:16.668309 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:22:16.668443 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-18 00:22:16.668458 | orchestrator | + local max_attempts=60 2026-04-18 00:22:16.668471 | orchestrator | + local name=osism-ansible 2026-04-18 00:22:16.668483 | orchestrator | + local attempt_num=1 2026-04-18 00:22:16.668889 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-18 00:22:16.705648 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:22:16.705727 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-18 00:22:16.705740 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-18 00:22:17.378958 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-18 00:22:17.502351 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-18 00:22:17.502508 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-04-18 00:22:17.502548 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-04-18 00:22:17.502561 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-04-18 00:22:17.502575 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-04-18 00:22:17.502586 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-04-18 00:22:17.502598 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-04-18 00:22:17.502609 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2026-04-18 00:22:17.502621 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-04-18 00:22:17.502632 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-04-18 00:22:17.502644 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 
"/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-04-18 00:22:17.502655 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-04-18 00:22:17.502689 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-04-18 00:22:17.502701 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-04-18 00:22:17.502712 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-04-18 00:22:17.502724 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-04-18 00:22:17.506405 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-18 00:22:17.534818 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-18 00:22:17.534887 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-18 00:22:17.536731 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-18 00:22:29.984934 | orchestrator | 2026-04-18 00:22:29 | INFO  | Prepare task for execution of resolvconf. 2026-04-18 00:22:30.175597 | orchestrator | 2026-04-18 00:22:30 | INFO  | Task 8feeab20-7e8a-45ec-b032-c26fe11c9c65 (resolvconf) was prepared for execution. 2026-04-18 00:22:30.175693 | orchestrator | 2026-04-18 00:22:30 | INFO  | It takes a moment until task 8feeab20-7e8a-45ec-b032-c26fe11c9c65 (resolvconf) has been started and output is visible here. 
2026-04-18 00:22:43.703900 | orchestrator | 2026-04-18 00:22:43.704018 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-18 00:22:43.704045 | orchestrator | 2026-04-18 00:22:43.704064 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-18 00:22:43.704106 | orchestrator | Saturday 18 April 2026 00:22:33 +0000 (0:00:00.173) 0:00:00.173 ******** 2026-04-18 00:22:43.704124 | orchestrator | ok: [testbed-manager] 2026-04-18 00:22:43.704144 | orchestrator | 2026-04-18 00:22:43.704162 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-18 00:22:43.704184 | orchestrator | Saturday 18 April 2026 00:22:37 +0000 (0:00:04.567) 0:00:04.740 ******** 2026-04-18 00:22:43.704204 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:22:43.704224 | orchestrator | 2026-04-18 00:22:43.704239 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-18 00:22:43.704250 | orchestrator | Saturday 18 April 2026 00:22:37 +0000 (0:00:00.061) 0:00:04.802 ******** 2026-04-18 00:22:43.704261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-18 00:22:43.704273 | orchestrator | 2026-04-18 00:22:43.704284 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-18 00:22:43.704295 | orchestrator | Saturday 18 April 2026 00:22:37 +0000 (0:00:00.092) 0:00:04.895 ******** 2026-04-18 00:22:43.704306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-18 00:22:43.704316 | orchestrator | 2026-04-18 00:22:43.704327 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-18 00:22:43.704338 | orchestrator | Saturday 18 April 2026 00:22:38 +0000 (0:00:00.082) 0:00:04.977 ******** 2026-04-18 00:22:43.704349 | orchestrator | ok: [testbed-manager] 2026-04-18 00:22:43.704359 | orchestrator | 2026-04-18 00:22:43.704399 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-18 00:22:43.704411 | orchestrator | Saturday 18 April 2026 00:22:39 +0000 (0:00:01.090) 0:00:06.067 ******** 2026-04-18 00:22:43.704448 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:22:43.704461 | orchestrator | 2026-04-18 00:22:43.704474 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-18 00:22:43.704487 | orchestrator | Saturday 18 April 2026 00:22:39 +0000 (0:00:00.054) 0:00:06.121 ******** 2026-04-18 00:22:43.704499 | orchestrator | ok: [testbed-manager] 2026-04-18 00:22:43.704510 | orchestrator | 2026-04-18 00:22:43.704522 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-18 00:22:43.704534 | orchestrator | Saturday 18 April 2026 00:22:39 +0000 (0:00:00.511) 0:00:06.632 ******** 2026-04-18 00:22:43.704547 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:22:43.704559 | orchestrator | 2026-04-18 00:22:43.704571 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-18 00:22:43.704585 | orchestrator | Saturday 18 April 2026 00:22:39 +0000 (0:00:00.078) 0:00:06.711 ******** 2026-04-18 00:22:43.704598 | orchestrator | changed: [testbed-manager] 2026-04-18 00:22:43.704610 | orchestrator | 2026-04-18 00:22:43.704622 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-18 00:22:43.704633 | orchestrator | Saturday 18 April 2026 00:22:40 +0000 (0:00:00.566) 0:00:07.278 ******** 2026-04-18 00:22:43.704646 | orchestrator | changed: 
[testbed-manager] 2026-04-18 00:22:43.704658 | orchestrator | 2026-04-18 00:22:43.704670 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-18 00:22:43.704683 | orchestrator | Saturday 18 April 2026 00:22:41 +0000 (0:00:01.089) 0:00:08.367 ******** 2026-04-18 00:22:43.704695 | orchestrator | ok: [testbed-manager] 2026-04-18 00:22:43.704707 | orchestrator | 2026-04-18 00:22:43.704719 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-18 00:22:43.704731 | orchestrator | Saturday 18 April 2026 00:22:42 +0000 (0:00:00.949) 0:00:09.317 ******** 2026-04-18 00:22:43.704743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-18 00:22:43.704755 | orchestrator | 2026-04-18 00:22:43.704767 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-18 00:22:43.704779 | orchestrator | Saturday 18 April 2026 00:22:42 +0000 (0:00:00.081) 0:00:09.399 ******** 2026-04-18 00:22:43.704790 | orchestrator | changed: [testbed-manager] 2026-04-18 00:22:43.704801 | orchestrator | 2026-04-18 00:22:43.704811 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:22:43.704823 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-18 00:22:43.704833 | orchestrator | 2026-04-18 00:22:43.704844 | orchestrator | 2026-04-18 00:22:43.704855 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:22:43.704865 | orchestrator | Saturday 18 April 2026 00:22:43 +0000 (0:00:01.113) 0:00:10.512 ******** 2026-04-18 00:22:43.704876 | orchestrator | =============================================================================== 2026-04-18 00:22:43.704886 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.57s 2026-04-18 00:22:43.704897 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s 2026-04-18 00:22:43.704907 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.09s 2026-04-18 00:22:43.704918 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.09s 2026-04-18 00:22:43.704928 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2026-04-18 00:22:43.704952 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-04-18 00:22:43.704983 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s 2026-04-18 00:22:43.705002 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-04-18 00:22:43.705013 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-04-18 00:22:43.705032 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-04-18 00:22:43.705043 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-04-18 00:22:43.705054 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-04-18 00:22:43.705064 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-04-18 00:22:43.879541 | orchestrator | + osism apply sshconfig 2026-04-18 00:22:55.137971 | orchestrator | 2026-04-18 00:22:55 | INFO  | Prepare task for execution of sshconfig. 2026-04-18 00:22:55.202519 | orchestrator | 2026-04-18 00:22:55 | INFO  | Task b20bfb8d-877e-4946-9f16-2d02c93fdbc0 (sshconfig) was prepared for execution. 
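The resolvconf play above replaces `/etc/resolv.conf` with a symlink to systemd-resolved's stub resolver file (the "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task). A rough shell equivalent, with the paths passed as arguments so the sketch can run against a scratch directory; the role itself does this through the Ansible `file` module, not a shell command:

```shell
# Illustrative equivalent of the resolvconf role's link step. Paths are
# parameters here (the real targets are /run/systemd/resolve/stub-resolv.conf
# and /etc/resolv.conf); the role uses the file module with state: link.
link_stub_resolv() {
    local stub="$1"    # e.g. /run/systemd/resolve/stub-resolv.conf
    local target="$2"  # e.g. /etc/resolv.conf
    # -s symlink, -f replace an existing file, -n do not follow an
    # existing symlink at the target.
    ln -sfn "$stub" "$target"
}
```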
2026-04-18 00:22:55.202641 | orchestrator | 2026-04-18 00:22:55 | INFO  | It takes a moment until task b20bfb8d-877e-4946-9f16-2d02c93fdbc0 (sshconfig) has been started and output is visible here. 2026-04-18 00:23:06.041628 | orchestrator | 2026-04-18 00:23:06.041744 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-18 00:23:06.041762 | orchestrator | 2026-04-18 00:23:06.041774 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-18 00:23:06.041786 | orchestrator | Saturday 18 April 2026 00:22:58 +0000 (0:00:00.186) 0:00:00.186 ******** 2026-04-18 00:23:06.041797 | orchestrator | ok: [testbed-manager] 2026-04-18 00:23:06.041809 | orchestrator | 2026-04-18 00:23:06.041820 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-18 00:23:06.041831 | orchestrator | Saturday 18 April 2026 00:22:59 +0000 (0:00:00.908) 0:00:01.094 ******** 2026-04-18 00:23:06.041842 | orchestrator | changed: [testbed-manager] 2026-04-18 00:23:06.041854 | orchestrator | 2026-04-18 00:23:06.041865 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-18 00:23:06.041876 | orchestrator | Saturday 18 April 2026 00:22:59 +0000 (0:00:00.522) 0:00:01.617 ******** 2026-04-18 00:23:06.041887 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-18 00:23:06.041898 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-18 00:23:06.041910 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-18 00:23:06.041921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-18 00:23:06.041931 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-18 00:23:06.041942 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-18 00:23:06.041953 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-18 00:23:06.041964 | orchestrator | 2026-04-18 00:23:06.041975 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-18 00:23:06.041986 | orchestrator | Saturday 18 April 2026 00:23:05 +0000 (0:00:05.452) 0:00:07.070 ******** 2026-04-18 00:23:06.041997 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:23:06.042007 | orchestrator | 2026-04-18 00:23:06.042073 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-18 00:23:06.042085 | orchestrator | Saturday 18 April 2026 00:23:05 +0000 (0:00:00.100) 0:00:07.171 ******** 2026-04-18 00:23:06.042096 | orchestrator | changed: [testbed-manager] 2026-04-18 00:23:06.042107 | orchestrator | 2026-04-18 00:23:06.042118 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:23:06.042131 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-18 00:23:06.042143 | orchestrator | 2026-04-18 00:23:06.042153 | orchestrator | 2026-04-18 00:23:06.042167 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:23:06.042180 | orchestrator | Saturday 18 April 2026 00:23:05 +0000 (0:00:00.550) 0:00:07.721 ******** 2026-04-18 00:23:06.042194 | orchestrator | =============================================================================== 2026-04-18 00:23:06.042235 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.45s 2026-04-18 00:23:06.042249 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.91s 2026-04-18 00:23:06.042262 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2026-04-18 00:23:06.042274 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.52s 2026-04-18 00:23:06.042287 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s 2026-04-18 00:23:06.217906 | orchestrator | + osism apply known-hosts 2026-04-18 00:23:17.486855 | orchestrator | 2026-04-18 00:23:17 | INFO  | Prepare task for execution of known-hosts. 2026-04-18 00:23:17.560504 | orchestrator | 2026-04-18 00:23:17 | INFO  | Task 6866ec6c-0a85-407a-a922-d2ff9a810bb7 (known-hosts) was prepared for execution. 2026-04-18 00:23:17.560599 | orchestrator | 2026-04-18 00:23:17 | INFO  | It takes a moment until task 6866ec6c-0a85-407a-a922-d2ff9a810bb7 (known-hosts) has been started and output is visible here. 2026-04-18 00:23:32.265302 | orchestrator | 2026-04-18 00:23:32.265476 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-18 00:23:32.265507 | orchestrator | 2026-04-18 00:23:32.265529 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-18 00:23:32.265550 | orchestrator | Saturday 18 April 2026 00:23:20 +0000 (0:00:00.177) 0:00:00.177 ******** 2026-04-18 00:23:32.265571 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-18 00:23:32.265589 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-18 00:23:32.265609 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-18 00:23:32.265627 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-18 00:23:32.265647 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-18 00:23:32.265665 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-18 00:23:32.265685 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-18 00:23:32.265704 | orchestrator | 2026-04-18 00:23:32.265740 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-18 
00:23:32.265762 | orchestrator | Saturday 18 April 2026 00:23:26 +0000 (0:00:06.032) 0:00:06.210 ******** 2026-04-18 00:23:32.265783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-18 00:23:32.265806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-18 00:23:32.265828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-18 00:23:32.265850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-18 00:23:32.265869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-18 00:23:32.265890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-18 00:23:32.265911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-18 00:23:32.265932 | orchestrator | 2026-04-18 00:23:32.265954 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:32.265968 | orchestrator | Saturday 18 April 2026 00:23:26 +0000 (0:00:00.171) 0:00:06.382 ******** 2026-04-18 00:23:32.266006 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICA+IIUdnVseEZFiJ7sSeM7tDxq59YPxBAKKB+OmUv1O) 2026-04-18 00:23:32.266088 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO/w43WbwGm6ktlR/DAcnZdle60tyF61FAlKCBg+EKq0oUdObGbaAo/foAHSp/dWKdXlGEMrIjwCVbHm+4qRbMOPa66qcEqYtCDs+d3U4VOkcjuLV2v79FoY193NK+JKI3WZDX1tJDYoiX16CSVsKuKFX0tdYPuYTb7iygFnNzEZmWW6c+ayL+wr6X7iqrXWyJoxH87ZrXAgdG0CQMwxwoFwdlFWUUHF+B6LNKqUoAy+GMgZoy+nw9R7Pc7wLjjCjP20gmyStdZgMtUZ20dD1nrBBT0AMzUrv/TXuZlWVk0mDK7TXdEzA1Db2eokLDFgiOGcU15nYYwmWz0b2y3nrhUH+Pd6sH/WvOsq1xr3I0Z6NYOLAlBbIrlqxmR1A7hS8y0djouVK1OjbVjz13B2pXRRmqb0c7fA00X8z+BDMfFgOvYqrzlxAHXvouKBmj3vUYlVWQAysxQUwXx+w03ygKAtii2pUScaGEKlOAdiLh8iu1SuroRov6b8wxJGbBYe0=) 2026-04-18 00:23:32.266112 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+SEWbimwINR+wqZ0r46WEg5eI2lmfVI3b8JuKgmQ7ufeqarqQnp7kUk457mWbGAznSM1cEnAqUburh41K/RN4=) 2026-04-18 00:23:32.266129 | orchestrator | 2026-04-18 00:23:32.266147 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:32.266167 | orchestrator | Saturday 18 April 2026 00:23:27 +0000 (0:00:01.234) 0:00:07.616 ******** 2026-04-18 00:23:32.266214 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDtSQjiU1Klr2v4JxmQu1ABJSiUpAIJXj640em7z5RtcGF1Y9SrU/D8PuJ+UCRMw0MCLOSoUcC+n16ADfxLhA8gLuQ13hoyXhb4ZgNqqFPV/OWKLUR33QwDham3C6EN84c+LG/ZotOYItQnMBhSzjfJANZAFLGTvnUm77w8svLpm3LcyQ7xnU6B6cebsM/xsZLq0b8R4U8sMJMyOr/vLVDb+yXBgdFUpuAozdvW9OuTKbLkzqP+oEx136vQca7k3vei8z8NOgnA841BI9rSorxlrBhk9U+V5+vG52WgtOVb/2NYkZOOaZSLeI0DKMlSfGka0r9EJcmF10S/pa5XbCduK+C/G+Ob9Ul6ZiG0mifY53+5L3km+an1c/n5KFWskrAmwmSK9zIagQgjQJ3Gvtht1/Nu2sCMNzNqaWQEcMOigK9JHfV1BU5abHpjOO5GnmdFEJ8zSicIYp8Ptti4wOelT+cGdMgpw7ADCcT7Ov4FRLV2vRXeSVck6LV4MEWszs=) 
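The known_hosts play runs `ssh-keyscan` against every testbed host and writes the collected rsa/ecdsa/ed25519 keys out, as the per-host `changed` items above show. A simplified sketch of that scan-and-append pattern; the function name and output path are illustrative, and the role performs the write via templated Ansible tasks rather than a loop like this:

```shell
# Rough equivalent of the osism.commons.known_hosts role: scan each host's
# ssh host keys and append them to a shared known_hosts file. The key types
# mirror the entries visible in the log; -T caps the per-host timeout.
scan_known_hosts() {
    local out="$1"; shift
    local host
    for host in "$@"; do
        ssh-keyscan -T 5 -t rsa,ecdsa,ed25519 "$host" >> "$out" 2>/dev/null
    done
}
```

Usage matching this deployment would be `scan_known_hosts ~/.ssh/known_hosts testbed-manager testbed-node-{0..5}`.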
2026-04-18 00:23:32.266237 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDV7btcgHlE34ibWtXz1QtwQ1Ozi7Y9nGp0/WIy0jJ62) 2026-04-18 00:23:32.266256 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOqKpRlZGXrkwBldnuYtgWSfwsSiLRHy3m1FVZHYz2L6VvIuKM/COGF5nbyD1djaqa8vraAglIhNEitlIIJyK6s=) 2026-04-18 00:23:32.266276 | orchestrator | 2026-04-18 00:23:32.266295 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:32.266315 | orchestrator | Saturday 18 April 2026 00:23:28 +0000 (0:00:00.993) 0:00:08.609 ******** 2026-04-18 00:23:32.266330 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFyhL9EPtjQpHzE5occfRB6QHMge3QTDYdLQ+jqC1kq2qLI6J9bZ8YU9LkIPcsUhRlHoAXgjXRe3KaoYZucZ8Yc=) 2026-04-18 00:23:32.266341 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqhfW/bp8VbtWsBqL1ZPLW49nFxcIAYbcCGC//duDaf) 2026-04-18 00:23:32.266352 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/f2s9zSz8fMD7UqnfNhbEUAHklxuNwCgobOjDFCe614oZg8uu8ES+kZhbXHvWM5YpdLyLeckFi4+4BLjqiSaPzEJ+Zv08SQ/X/outjLCFCsuDzc9IgHNoQCmq3us0iqHGaRD5NVYeoDUpZyHswKSsXj5huX1lqjuUDp1lxpTlfibSyC8dOdyc1t8VrPEyRa8/pGHUMs5toQN6WO3RzSGQ7AL9in0uFN9Z/AalRRpdOO66AwrU4yqFosqISqMLAig/KAByQYvoEzZz7EtI2EtBk0siBlmcPOFDW/MuNcxjUq8jUwe41EoPWaTV1SSaDlR2ymfY43mGATGlWo1ZrKqC0ANarKChcKOTuOmIEW3DosUw5rW9u32b2GHm3iP4jD5d7C6jGt+JnCyLQIWqz2wWdvhjHGdcl3S0zKkUNm/id4t8kkI+8dSg6nF5/GcPca1W/UGDYnh/1WMX4XXV916sRysHzu2hRiTX3CfEhX3Z7fWvxbudQEvuRNfqKg3DXF8=) 2026-04-18 00:23:32.266364 | orchestrator | 2026-04-18 00:23:32.266375 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:32.266499 
| orchestrator | Saturday 18 April 2026 00:23:29 +0000 (0:00:00.992) 0:00:09.602 ******** 2026-04-18 00:23:32.266511 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNQ3grIu4e8RQUi5Y6T2qycZA0mm5aiBlEzO4XY2bsE) 2026-04-18 00:23:32.266536 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvfXzzI279ZUHxBnVAdEbAzx1dA5RhWiGBfOdy9p6LdfdStZ1RjN/DVycxk7a+2hFrXuKPxeLkfqEFZw8Yr6jZPdnE94Fa42aeoUhZTSsK30z7cEojcqAAYWaNTXzhywahKzfUsWcUCI/coQyCbzYLejMrYoSmw8OYBCq/dwijTlbxzLFPg4IxxjWt5fuZziwjCfvvfMfKoDq2EEiEUhQDq232i30gXs8aXEjoeDZk/Emw5QebCsgXKUIeb5GW7AtR3iIQNxokOYzCvx0TOhs7HB6YqrErzxAJjmSUwqA8Ezky5j2k8m6WD51ACxqgu9q/YX/wGTTz0LXXpdf3aoihU/Oyee5Hc1yU00NtEKLjUyvlwZwBaqaNCqJWDrwvUpdQ8l3HwiMK4S3iVqpkQBEKqhlEgE7S4doNDiFXfqKtF/TgyncLz4uk7E26QyRHX3moymchGsHVDKYfc10xzKFz6Jd5Mcu3SuBoO5oDKE1bu+bOQPQx2SMsBCj2jW5Z/wk=) 2026-04-18 00:23:32.266548 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLf42kmHJYNBDIzTh+yRdLJDQ0CCPtWbXOFHikTCmkypXuKRaBPH917uYt0aHrFvebKQnizX3uEBmSedluEpeDs=) 2026-04-18 00:23:32.266559 | orchestrator | 2026-04-18 00:23:32.266570 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:32.266581 | orchestrator | Saturday 18 April 2026 00:23:30 +0000 (0:00:01.007) 0:00:10.609 ******** 2026-04-18 00:23:32.266592 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDRVxlsoKV9yKjDoEI09P0/oB9C3c64xJ80yu52nRx8/7ByF9g9YmfP8LyQi5P2DdiACzEHyCRJLXN346bWuOXkQ+WuLrUemTEG2rvKLNT+MGPFvIWYwfGQleUY010+rC9tbv8XaVTGoFtvXsYYneh7sMIqKg7d2EvwVQ/wXzEIjR8LaNnF7x4k1x5MfJt8yLYNvN32oxb2Q11Ih7vQKP4Mq+gZpICI84bBa5pMsPMzvXiF8JzEFXr9eS/30Z2MjjYNoRkb7q7M5UAhB9W1T/YkqWBV73LzIOHNi/N9s/w7FRvCCCJJfKoblHxysx+7dAM9S69s88J3Xha51wgaLn4Ht/81QLGzlxCvbiKsyw5/ozwmCdJuXfounaek0Qn8ThIubUF4cTeDZbmwGaM+HVMnoo6HRFNMum1haBLN/UBxD5DJH77Ug4edg4LA5eavTpyqWFWCe+/zAwcxlLZCGPaw4ImAuu3f65+DvbRUYABxQffp25EvwnPen74/MNMQW2M=) 2026-04-18 00:23:32.266603 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPgEsnQykaJTCmmOSlN181pDZzBzz2hQE0LfyagSR1ZCXqRwwSdKS9gDgQy5KX7gHDRM59R3qR/yz7AwELlJxXk=) 2026-04-18 00:23:32.266615 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDpZRvwIt23/ahanNyyocUSqH2KJfg9fAM3LbLeRr6bX) 2026-04-18 00:23:32.266625 | orchestrator | 2026-04-18 00:23:32.266636 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:32.266647 | orchestrator | Saturday 18 April 2026 00:23:31 +0000 (0:00:00.979) 0:00:11.589 ******** 2026-04-18 00:23:32.266670 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtO3rX63vc0jiP94YWIEpCnucV3e2iWSvc+EJZpWdT8Z/Gz4TXDrHUC3M7QqdS60afRD6Lx7IBeh3FKVCDebuUhqIVRZAFwK85fXNo+Eceyk+/gt2ZFlNaIHZaR/HfSD29P3BTZ2Q/iW8WkF3+B9CcJyPGbkPc8FEVl54aiqLy0Y9yRUl4PLW621ipH9wbVHR+4nmb3eO2Ut65ut5cHldI3YGs2b80HJuciPc6cOCOTT/L6qy37q7Mn9RLV840icbVB6HmcvRUerjyBSWGD5aszeff7fhC1FzBgqJiKXBf3WtFIbMAgnTxP7HN8iCEYGHPfiy7EhTKIy3Iu4yDO75AXSrbYG/FIrAaI7/cJwJktV3ebgeiQO02Bk01wZT9AbcQiNpn0inO3jC+/K9s//SDpvlrG3dPbKL+djo6v7sejFFCZ6oVlaCwNeQHDe7Ur0dlXJ/avPfIjbSGCX2hZ00r4iVaoXBDXjFKKSwtbfb9FQ9TXvhKEHdzyxz/8xWeVQU=) 2026-04-18 00:23:43.460042 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIyuRRb1D0rTPpJbmwcvl+rnznNNbA4KBv6cjMBUa2pGdjzQ1borwsZAk1Wn60XPoB9t0weumARgsgr/+Msdyv4=) 2026-04-18 00:23:43.460125 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILbBdlRURWuN24ncXxK1t3zT426X6fko+MXdyDMO72+4) 2026-04-18 00:23:43.460133 | orchestrator | 2026-04-18 00:23:43.460139 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:43.460144 | orchestrator | Saturday 18 April 2026 00:23:32 +0000 (0:00:01.002) 0:00:12.591 ******** 2026-04-18 00:23:43.460149 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO7EHgmrgTqNYF/HqQGI++UhTUNmCLVwBzpPONebnS7VlYpe263ynpzumN8z6WCqqBarqv7l9h2anUR2RLO3Ubc=) 2026-04-18 00:23:43.460171 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbtMyGTpmFjVnWyz+O/jXC4089zEMUB+7whadK/8FZGrwf5N8f7hlIjWXYNqaUp4WW/UBp+gp9208lbhYNG7Ukbxce5VT8YtN94UzTB3TOpCxH4N7ccUbgyGFneQPgi3wJ1u5jRRr27x+AiW54zPv9SjSp6i7ZIC8BEGVcBwSawOe1zP/crmxkkhJA+7wMi6mO6aFsNxzkEnS7fKRvEW8LN4cAkydFzvoay69B+MK650tFJXcO4o8dN53wpqza0vS9ITv5S6dp4U+3gX6zHQvJoMD5/f8LPfZddQCObuKD0OSylOavZcywJo/Nrzgbn7Oc3T/M8454V8LvUzDHggdUJfXU11VI1V11NRDIDMaGhwJiASyJDgpsXNxMSyxTNAg0DjKSYy8fviCdZp/062DHuiKgc7QVb+ConYjiCjgT1FpIzsat7dKZzZj312Ty52YkJV+P0KIWbo9Ox9D/jJbgB/5Gs4HkaLb7e/4xiWu6THiRqtku9rtHn59J7TVRHn8=) 2026-04-18 00:23:43.460176 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDzkgFOV8CGV+Vhct72hgqXwKPb4zUHGDYk9yfFF15tu) 2026-04-18 00:23:43.460180 | orchestrator | 2026-04-18 00:23:43.460185 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-18 00:23:43.460190 | orchestrator | Saturday 18 April 2026 00:23:33 +0000 (0:00:00.997) 
0:00:13.589 ******** 2026-04-18 00:23:43.460194 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-18 00:23:43.460199 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-18 00:23:43.460203 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-18 00:23:43.460207 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-18 00:23:43.460210 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-18 00:23:43.460214 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-18 00:23:43.460218 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-18 00:23:43.460221 | orchestrator | 2026-04-18 00:23:43.460225 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-18 00:23:43.460231 | orchestrator | Saturday 18 April 2026 00:23:39 +0000 (0:00:05.238) 0:00:18.827 ******** 2026-04-18 00:23:43.460240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-18 00:23:43.460249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-18 00:23:43.460270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-18 00:23:43.460276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-18 00:23:43.460283 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-18 00:23:43.460289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-18 00:23:43.460296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-18 00:23:43.460302 | orchestrator | 2026-04-18 00:23:43.460308 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:43.460315 | orchestrator | Saturday 18 April 2026 00:23:39 +0000 (0:00:00.177) 0:00:19.005 ******** 2026-04-18 00:23:43.460322 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+SEWbimwINR+wqZ0r46WEg5eI2lmfVI3b8JuKgmQ7ufeqarqQnp7kUk457mWbGAznSM1cEnAqUburh41K/RN4=) 2026-04-18 00:23:43.460353 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO/w43WbwGm6ktlR/DAcnZdle60tyF61FAlKCBg+EKq0oUdObGbaAo/foAHSp/dWKdXlGEMrIjwCVbHm+4qRbMOPa66qcEqYtCDs+d3U4VOkcjuLV2v79FoY193NK+JKI3WZDX1tJDYoiX16CSVsKuKFX0tdYPuYTb7iygFnNzEZmWW6c+ayL+wr6X7iqrXWyJoxH87ZrXAgdG0CQMwxwoFwdlFWUUHF+B6LNKqUoAy+GMgZoy+nw9R7Pc7wLjjCjP20gmyStdZgMtUZ20dD1nrBBT0AMzUrv/TXuZlWVk0mDK7TXdEzA1Db2eokLDFgiOGcU15nYYwmWz0b2y3nrhUH+Pd6sH/WvOsq1xr3I0Z6NYOLAlBbIrlqxmR1A7hS8y0djouVK1OjbVjz13B2pXRRmqb0c7fA00X8z+BDMfFgOvYqrzlxAHXvouKBmj3vUYlVWQAysxQUwXx+w03ygKAtii2pUScaGEKlOAdiLh8iu1SuroRov6b8wxJGbBYe0=) 2026-04-18 00:23:43.460361 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICA+IIUdnVseEZFiJ7sSeM7tDxq59YPxBAKKB+OmUv1O) 2026-04-18 
00:23:43.460368 | orchestrator | 2026-04-18 00:23:43.460434 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:43.460441 | orchestrator | Saturday 18 April 2026 00:23:40 +0000 (0:00:01.015) 0:00:20.020 ******** 2026-04-18 00:23:43.460447 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOqKpRlZGXrkwBldnuYtgWSfwsSiLRHy3m1FVZHYz2L6VvIuKM/COGF5nbyD1djaqa8vraAglIhNEitlIIJyK6s=) 2026-04-18 00:23:43.460453 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDtSQjiU1Klr2v4JxmQu1ABJSiUpAIJXj640em7z5RtcGF1Y9SrU/D8PuJ+UCRMw0MCLOSoUcC+n16ADfxLhA8gLuQ13hoyXhb4ZgNqqFPV/OWKLUR33QwDham3C6EN84c+LG/ZotOYItQnMBhSzjfJANZAFLGTvnUm77w8svLpm3LcyQ7xnU6B6cebsM/xsZLq0b8R4U8sMJMyOr/vLVDb+yXBgdFUpuAozdvW9OuTKbLkzqP+oEx136vQca7k3vei8z8NOgnA841BI9rSorxlrBhk9U+V5+vG52WgtOVb/2NYkZOOaZSLeI0DKMlSfGka0r9EJcmF10S/pa5XbCduK+C/G+Ob9Ul6ZiG0mifY53+5L3km+an1c/n5KFWskrAmwmSK9zIagQgjQJ3Gvtht1/Nu2sCMNzNqaWQEcMOigK9JHfV1BU5abHpjOO5GnmdFEJ8zSicIYp8Ptti4wOelT+cGdMgpw7ADCcT7Ov4FRLV2vRXeSVck6LV4MEWszs=) 2026-04-18 00:23:43.460460 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDV7btcgHlE34ibWtXz1QtwQ1Ozi7Y9nGp0/WIy0jJ62) 2026-04-18 00:23:43.460466 | orchestrator | 2026-04-18 00:23:43.460472 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:43.460479 | orchestrator | Saturday 18 April 2026 00:23:41 +0000 (0:00:01.007) 0:00:21.028 ******** 2026-04-18 00:23:43.460485 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFyhL9EPtjQpHzE5occfRB6QHMge3QTDYdLQ+jqC1kq2qLI6J9bZ8YU9LkIPcsUhRlHoAXgjXRe3KaoYZucZ8Yc=) 2026-04-18 00:23:43.460492 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/f2s9zSz8fMD7UqnfNhbEUAHklxuNwCgobOjDFCe614oZg8uu8ES+kZhbXHvWM5YpdLyLeckFi4+4BLjqiSaPzEJ+Zv08SQ/X/outjLCFCsuDzc9IgHNoQCmq3us0iqHGaRD5NVYeoDUpZyHswKSsXj5huX1lqjuUDp1lxpTlfibSyC8dOdyc1t8VrPEyRa8/pGHUMs5toQN6WO3RzSGQ7AL9in0uFN9Z/AalRRpdOO66AwrU4yqFosqISqMLAig/KAByQYvoEzZz7EtI2EtBk0siBlmcPOFDW/MuNcxjUq8jUwe41EoPWaTV1SSaDlR2ymfY43mGATGlWo1ZrKqC0ANarKChcKOTuOmIEW3DosUw5rW9u32b2GHm3iP4jD5d7C6jGt+JnCyLQIWqz2wWdvhjHGdcl3S0zKkUNm/id4t8kkI+8dSg6nF5/GcPca1W/UGDYnh/1WMX4XXV916sRysHzu2hRiTX3CfEhX3Z7fWvxbudQEvuRNfqKg3DXF8=) 2026-04-18 00:23:43.460498 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINqhfW/bp8VbtWsBqL1ZPLW49nFxcIAYbcCGC//duDaf) 2026-04-18 00:23:43.460505 | orchestrator | 2026-04-18 00:23:43.460511 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:43.460517 | orchestrator | Saturday 18 April 2026 00:23:42 +0000 (0:00:01.094) 0:00:22.122 ******** 2026-04-18 00:23:43.460523 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLf42kmHJYNBDIzTh+yRdLJDQ0CCPtWbXOFHikTCmkypXuKRaBPH917uYt0aHrFvebKQnizX3uEBmSedluEpeDs=) 2026-04-18 00:23:43.460529 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNQ3grIu4e8RQUi5Y6T2qycZA0mm5aiBlEzO4XY2bsE) 2026-04-18 00:23:43.460545 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCvfXzzI279ZUHxBnVAdEbAzx1dA5RhWiGBfOdy9p6LdfdStZ1RjN/DVycxk7a+2hFrXuKPxeLkfqEFZw8Yr6jZPdnE94Fa42aeoUhZTSsK30z7cEojcqAAYWaNTXzhywahKzfUsWcUCI/coQyCbzYLejMrYoSmw8OYBCq/dwijTlbxzLFPg4IxxjWt5fuZziwjCfvvfMfKoDq2EEiEUhQDq232i30gXs8aXEjoeDZk/Emw5QebCsgXKUIeb5GW7AtR3iIQNxokOYzCvx0TOhs7HB6YqrErzxAJjmSUwqA8Ezky5j2k8m6WD51ACxqgu9q/YX/wGTTz0LXXpdf3aoihU/Oyee5Hc1yU00NtEKLjUyvlwZwBaqaNCqJWDrwvUpdQ8l3HwiMK4S3iVqpkQBEKqhlEgE7S4doNDiFXfqKtF/TgyncLz4uk7E26QyRHX3moymchGsHVDKYfc10xzKFz6Jd5Mcu3SuBoO5oDKE1bu+bOQPQx2SMsBCj2jW5Z/wk=) 2026-04-18 00:23:47.581026 | orchestrator | 2026-04-18 00:23:47.581126 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:47.581145 | orchestrator | Saturday 18 April 2026 00:23:43 +0000 (0:00:01.051) 0:00:23.173 ******** 2026-04-18 00:23:47.581154 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRVxlsoKV9yKjDoEI09P0/oB9C3c64xJ80yu52nRx8/7ByF9g9YmfP8LyQi5P2DdiACzEHyCRJLXN346bWuOXkQ+WuLrUemTEG2rvKLNT+MGPFvIWYwfGQleUY010+rC9tbv8XaVTGoFtvXsYYneh7sMIqKg7d2EvwVQ/wXzEIjR8LaNnF7x4k1x5MfJt8yLYNvN32oxb2Q11Ih7vQKP4Mq+gZpICI84bBa5pMsPMzvXiF8JzEFXr9eS/30Z2MjjYNoRkb7q7M5UAhB9W1T/YkqWBV73LzIOHNi/N9s/w7FRvCCCJJfKoblHxysx+7dAM9S69s88J3Xha51wgaLn4Ht/81QLGzlxCvbiKsyw5/ozwmCdJuXfounaek0Qn8ThIubUF4cTeDZbmwGaM+HVMnoo6HRFNMum1haBLN/UBxD5DJH77Ug4edg4LA5eavTpyqWFWCe+/zAwcxlLZCGPaw4ImAuu3f65+DvbRUYABxQffp25EvwnPen74/MNMQW2M=) 2026-04-18 00:23:47.581181 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPgEsnQykaJTCmmOSlN181pDZzBzz2hQE0LfyagSR1ZCXqRwwSdKS9gDgQy5KX7gHDRM59R3qR/yz7AwELlJxXk=) 2026-04-18 00:23:47.581191 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDpZRvwIt23/ahanNyyocUSqH2KJfg9fAM3LbLeRr6bX) 2026-04-18 00:23:47.581199 | orchestrator | 2026-04-18 00:23:47.581205 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:47.581212 | orchestrator | Saturday 18 April 2026 00:23:44 +0000 (0:00:01.024) 0:00:24.198 ******** 2026-04-18 00:23:47.581218 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtO3rX63vc0jiP94YWIEpCnucV3e2iWSvc+EJZpWdT8Z/Gz4TXDrHUC3M7QqdS60afRD6Lx7IBeh3FKVCDebuUhqIVRZAFwK85fXNo+Eceyk+/gt2ZFlNaIHZaR/HfSD29P3BTZ2Q/iW8WkF3+B9CcJyPGbkPc8FEVl54aiqLy0Y9yRUl4PLW621ipH9wbVHR+4nmb3eO2Ut65ut5cHldI3YGs2b80HJuciPc6cOCOTT/L6qy37q7Mn9RLV840icbVB6HmcvRUerjyBSWGD5aszeff7fhC1FzBgqJiKXBf3WtFIbMAgnTxP7HN8iCEYGHPfiy7EhTKIy3Iu4yDO75AXSrbYG/FIrAaI7/cJwJktV3ebgeiQO02Bk01wZT9AbcQiNpn0inO3jC+/K9s//SDpvlrG3dPbKL+djo6v7sejFFCZ6oVlaCwNeQHDe7Ur0dlXJ/avPfIjbSGCX2hZ00r4iVaoXBDXjFKKSwtbfb9FQ9TXvhKEHdzyxz/8xWeVQU=) 2026-04-18 00:23:47.581225 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIyuRRb1D0rTPpJbmwcvl+rnznNNbA4KBv6cjMBUa2pGdjzQ1borwsZAk1Wn60XPoB9t0weumARgsgr/+Msdyv4=) 2026-04-18 00:23:47.581232 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILbBdlRURWuN24ncXxK1t3zT426X6fko+MXdyDMO72+4) 2026-04-18 00:23:47.581239 | orchestrator | 2026-04-18 00:23:47.581245 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-18 00:23:47.581251 | orchestrator | Saturday 18 April 2026 00:23:45 +0000 (0:00:00.993) 0:00:25.191 ******** 2026-04-18 00:23:47.581257 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDzkgFOV8CGV+Vhct72hgqXwKPb4zUHGDYk9yfFF15tu) 2026-04-18 00:23:47.581264 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDbtMyGTpmFjVnWyz+O/jXC4089zEMUB+7whadK/8FZGrwf5N8f7hlIjWXYNqaUp4WW/UBp+gp9208lbhYNG7Ukbxce5VT8YtN94UzTB3TOpCxH4N7ccUbgyGFneQPgi3wJ1u5jRRr27x+AiW54zPv9SjSp6i7ZIC8BEGVcBwSawOe1zP/crmxkkhJA+7wMi6mO6aFsNxzkEnS7fKRvEW8LN4cAkydFzvoay69B+MK650tFJXcO4o8dN53wpqza0vS9ITv5S6dp4U+3gX6zHQvJoMD5/f8LPfZddQCObuKD0OSylOavZcywJo/Nrzgbn7Oc3T/M8454V8LvUzDHggdUJfXU11VI1V11NRDIDMaGhwJiASyJDgpsXNxMSyxTNAg0DjKSYy8fviCdZp/062DHuiKgc7QVb+ConYjiCjgT1FpIzsat7dKZzZj312Ty52YkJV+P0KIWbo9Ox9D/jJbgB/5Gs4HkaLb7e/4xiWu6THiRqtku9rtHn59J7TVRHn8=) 2026-04-18 00:23:47.581292 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO7EHgmrgTqNYF/HqQGI++UhTUNmCLVwBzpPONebnS7VlYpe263ynpzumN8z6WCqqBarqv7l9h2anUR2RLO3Ubc=) 2026-04-18 00:23:47.581299 | orchestrator | 2026-04-18 00:23:47.581305 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-18 00:23:47.581311 | orchestrator | Saturday 18 April 2026 00:23:46 +0000 (0:00:01.110) 0:00:26.302 ******** 2026-04-18 00:23:47.581318 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-18 00:23:47.581325 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-18 00:23:47.581338 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-18 00:23:47.581345 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-18 00:23:47.581351 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-18 00:23:47.581357 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-18 00:23:47.581363 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-18 00:23:47.581370 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:23:47.581376 | orchestrator | 2026-04-18 00:23:47.581428 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2026-04-18 00:23:47.581435 | orchestrator | Saturday 18 April 2026 00:23:46 +0000 (0:00:00.186) 0:00:26.488 ******** 2026-04-18 00:23:47.581441 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:23:47.581448 | orchestrator | 2026-04-18 00:23:47.581454 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-18 00:23:47.581460 | orchestrator | Saturday 18 April 2026 00:23:46 +0000 (0:00:00.053) 0:00:26.541 ******** 2026-04-18 00:23:47.581466 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:23:47.581472 | orchestrator | 2026-04-18 00:23:47.581478 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-18 00:23:47.581485 | orchestrator | Saturday 18 April 2026 00:23:46 +0000 (0:00:00.048) 0:00:26.590 ******** 2026-04-18 00:23:47.581491 | orchestrator | changed: [testbed-manager] 2026-04-18 00:23:47.581497 | orchestrator | 2026-04-18 00:23:47.581503 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:23:47.581509 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-18 00:23:47.581517 | orchestrator | 2026-04-18 00:23:47.581523 | orchestrator | 2026-04-18 00:23:47.581529 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:23:47.581535 | orchestrator | Saturday 18 April 2026 00:23:47 +0000 (0:00:00.495) 0:00:27.085 ******** 2026-04-18 00:23:47.581541 | orchestrator | =============================================================================== 2026-04-18 00:23:47.581547 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.03s 2026-04-18 00:23:47.581553 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.24s 2026-04-18 00:23:47.581560 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2026-04-18 00:23:47.581567 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-04-18 00:23:47.581573 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-04-18 00:23:47.581579 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-04-18 00:23:47.581585 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-18 00:23:47.581591 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-18 00:23:47.581597 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-18 00:23:47.581610 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-18 00:23:47.581617 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-18 00:23:47.581623 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-18 00:23:47.581629 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-18 00:23:47.581635 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-18 00:23:47.581641 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-18 00:23:47.581647 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-04-18 00:23:47.581653 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s 2026-04-18 00:23:47.581660 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-04-18 00:23:47.581666 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-04-18 00:23:47.581672 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-04-18 00:23:47.739550 | orchestrator | + osism apply squid 2026-04-18 00:23:59.061602 | orchestrator | 2026-04-18 00:23:59 | INFO  | Prepare task for execution of squid. 2026-04-18 00:23:59.131931 | orchestrator | 2026-04-18 00:23:59 | INFO  | Task 82f2a598-a4f5-400c-90b7-9dce32106589 (squid) was prepared for execution. 2026-04-18 00:23:59.132022 | orchestrator | 2026-04-18 00:23:59 | INFO  | It takes a moment until task 82f2a598-a4f5-400c-90b7-9dce32106589 (squid) has been started and output is visible here. 2026-04-18 00:26:06.098378 | orchestrator | 2026-04-18 00:26:06.098470 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-18 00:26:06.098479 | orchestrator | 2026-04-18 00:26:06.098528 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-18 00:26:06.098534 | orchestrator | Saturday 18 April 2026 00:24:02 +0000 (0:00:00.193) 0:00:00.193 ******** 2026-04-18 00:26:06.098540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-18 00:26:06.098547 | orchestrator | 2026-04-18 00:26:06.098553 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-18 00:26:06.098558 | orchestrator | Saturday 18 April 2026 00:24:02 +0000 (0:00:00.079) 0:00:00.273 ******** 2026-04-18 00:26:06.098564 | orchestrator | ok: [testbed-manager] 2026-04-18 00:26:06.098570 | orchestrator | 2026-04-18 00:26:06.098576 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-18 00:26:06.098582 | orchestrator | Saturday 18 April 2026 
00:24:04 +0000 (0:00:02.327) 0:00:02.600 ******** 2026-04-18 00:26:06.098589 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-18 00:26:06.098612 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-18 00:26:06.098619 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-18 00:26:06.098625 | orchestrator | 2026-04-18 00:26:06.098630 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-18 00:26:06.098636 | orchestrator | Saturday 18 April 2026 00:24:05 +0000 (0:00:01.225) 0:00:03.826 ******** 2026-04-18 00:26:06.098641 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-18 00:26:06.098647 | orchestrator | 2026-04-18 00:26:06.098652 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-18 00:26:06.098658 | orchestrator | Saturday 18 April 2026 00:24:06 +0000 (0:00:00.997) 0:00:04.823 ******** 2026-04-18 00:26:06.098664 | orchestrator | ok: [testbed-manager] 2026-04-18 00:26:06.098670 | orchestrator | 2026-04-18 00:26:06.098676 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-18 00:26:06.098682 | orchestrator | Saturday 18 April 2026 00:24:07 +0000 (0:00:00.332) 0:00:05.156 ******** 2026-04-18 00:26:06.098708 | orchestrator | changed: [testbed-manager] 2026-04-18 00:26:06.098716 | orchestrator | 2026-04-18 00:26:06.098725 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-18 00:26:06.098730 | orchestrator | Saturday 18 April 2026 00:24:08 +0000 (0:00:00.856) 0:00:06.012 ******** 2026-04-18 00:26:06.098735 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-18 00:26:06.098742 | orchestrator | ok: [testbed-manager] 2026-04-18 00:26:06.098747 | orchestrator | 2026-04-18 00:26:06.098752 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-18 00:26:06.098757 | orchestrator | Saturday 18 April 2026 00:24:42 +0000 (0:00:33.889) 0:00:39.901 ******** 2026-04-18 00:26:06.098763 | orchestrator | changed: [testbed-manager] 2026-04-18 00:26:06.098768 | orchestrator | 2026-04-18 00:26:06.098773 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-18 00:26:06.098778 | orchestrator | Saturday 18 April 2026 00:25:05 +0000 (0:00:23.159) 0:01:03.061 ******** 2026-04-18 00:26:06.098784 | orchestrator | Pausing for 60 seconds 2026-04-18 00:26:06.098789 | orchestrator | changed: [testbed-manager] 2026-04-18 00:26:06.098794 | orchestrator | 2026-04-18 00:26:06.098800 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-18 00:26:06.098805 | orchestrator | Saturday 18 April 2026 00:26:05 +0000 (0:01:00.078) 0:02:03.140 ******** 2026-04-18 00:26:06.098810 | orchestrator | ok: [testbed-manager] 2026-04-18 00:26:06.098815 | orchestrator | 2026-04-18 00:26:06.098820 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-18 00:26:06.098825 | orchestrator | Saturday 18 April 2026 00:26:05 +0000 (0:00:00.060) 0:02:03.200 ******** 2026-04-18 00:26:06.098830 | orchestrator | changed: [testbed-manager] 2026-04-18 00:26:06.098836 | orchestrator | 2026-04-18 00:26:06.098841 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:26:06.098846 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:26:06.098851 | orchestrator | 2026-04-18 00:26:06.098857 | orchestrator | 2026-04-18 00:26:06.098862 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-04-18 00:26:06.098867 | orchestrator | Saturday 18 April 2026 00:26:05 +0000 (0:00:00.589) 0:02:03.790 ******** 2026-04-18 00:26:06.098872 | orchestrator | =============================================================================== 2026-04-18 00:26:06.098877 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-04-18 00:26:06.098882 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.89s 2026-04-18 00:26:06.098887 | orchestrator | osism.services.squid : Restart squid service --------------------------- 23.16s 2026-04-18 00:26:06.098893 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.33s 2026-04-18 00:26:06.098898 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.23s 2026-04-18 00:26:06.098903 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.00s 2026-04-18 00:26:06.098908 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.86s 2026-04-18 00:26:06.098913 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2026-04-18 00:26:06.098918 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-04-18 00:26:06.098923 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-04-18 00:26:06.098929 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-04-18 00:26:06.269529 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-18 00:26:06.269605 | orchestrator | ++ semver 10.0.0 10.0.0-0 2026-04-18 00:26:06.340811 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-18 00:26:06.340878 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release/2024.2 2026-04-18 00:26:06.348651 | orchestrator | + set -e 2026-04-18 00:26:06.348728 | orchestrator | + NAMESPACE=kolla/release/2024.2 2026-04-18 00:26:06.348741 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2024.2#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-18 00:26:06.354846 | orchestrator | ++ semver 10.0.0 9.0.0 2026-04-18 00:26:06.408783 | orchestrator | + [[ 1 -lt 0 ]] 2026-04-18 00:26:06.409579 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-18 00:26:17.676423 | orchestrator | 2026-04-18 00:26:17 | INFO  | Prepare task for execution of operator. 2026-04-18 00:26:17.750264 | orchestrator | 2026-04-18 00:26:17 | INFO  | Task 99082f06-63ae-406f-9848-5811bfa158d7 (operator) was prepared for execution. 2026-04-18 00:26:17.750345 | orchestrator | 2026-04-18 00:26:17 | INFO  | It takes a moment until task 99082f06-63ae-406f-9848-5811bfa158d7 (operator) has been started and output is visible here. 
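The shell trace above (`[[ 10.0.0 != latest ]]`, `semver 10.0.0 10.0.0-0`, then `set-kolla-namespace.sh` running `sed` against `kolla.yml`) boils down to: if the deployed version is a pinned release at or above 10.0.0, point `docker_namespace` at the release-specific Kolla registry path. A minimal sketch of that logic, with assumptions flagged: `vercmp` is a stand-in for the testbed's `semver` helper (implemented here with `sort -V`, not the real tool), and `KOLLA_YML` is a hypothetical local path standing in for `/opt/configuration/inventory/group_vars/all/kolla.yml`:

```shell
#!/usr/bin/env bash
set -e

# Stand-in for the `semver` helper seen in the log: prints 1 if $1 >= $2,
# -1 otherwise. Uses GNU sort's version sort; not the real tool.
vercmp() {
    if [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}

MANAGER_VERSION=10.0.0
NAMESPACE=kolla/release/2024.2
KOLLA_YML=kolla.yml   # hypothetical path for this sketch

# Seed a file with a placeholder namespace, as the inventory would have.
printf 'docker_namespace: placeholder\n' > "$KOLLA_YML"

# Pinned releases >= 10.0.0 switch to the release-specific namespace,
# mirroring the sed call visible in the job log.
if [ "$MANAGER_VERSION" != latest ] && [ "$(vercmp "$MANAGER_VERSION" 10.0.0)" -ge 0 ]; then
    sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$KOLLA_YML"
fi

cat "$KOLLA_YML"
```

The second comparison in the log (`semver 10.0.0 9.0.0` followed by `[[ 1 -lt 0 ]]`) is the inverse guard: it only takes an alternate branch for versions strictly below 9.0.0, which 10.0.0 skips.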
2026-04-18 00:26:33.892250 | orchestrator |
2026-04-18 00:26:33.892402 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-18 00:26:33.892431 | orchestrator |
2026-04-18 00:26:33.892451 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-18 00:26:33.892469 | orchestrator | Saturday 18 April 2026 00:26:20 +0000 (0:00:00.187) 0:00:00.187 ********
2026-04-18 00:26:33.892481 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:26:33.892493 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:26:33.892504 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:26:33.892515 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:26:33.892525 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:26:33.892598 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:26:33.892619 | orchestrator |
2026-04-18 00:26:33.892638 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-18 00:26:33.892658 | orchestrator | Saturday 18 April 2026 00:26:25 +0000 (0:00:04.267) 0:00:04.455 ********
2026-04-18 00:26:33.892678 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:26:33.892698 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:26:33.892718 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:26:33.892738 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:26:33.892757 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:26:33.892771 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:26:33.892783 | orchestrator |
2026-04-18 00:26:33.892797 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-18 00:26:33.892810 | orchestrator |
2026-04-18 00:26:33.892823 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-18 00:26:33.892834 | orchestrator | Saturday 18 April 2026 00:26:26 +0000 (0:00:00.888) 0:00:05.344 ********
2026-04-18 00:26:33.892846 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:26:33.892858 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:26:33.892869 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:26:33.892880 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:26:33.892891 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:26:33.892901 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:26:33.892912 | orchestrator |
2026-04-18 00:26:33.892923 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-18 00:26:33.892934 | orchestrator | Saturday 18 April 2026 00:26:26 +0000 (0:00:00.155) 0:00:05.499 ********
2026-04-18 00:26:33.892945 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:26:33.892955 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:26:33.892966 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:26:33.892977 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:26:33.892987 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:26:33.892998 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:26:33.893009 | orchestrator |
2026-04-18 00:26:33.893020 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-18 00:26:33.893031 | orchestrator | Saturday 18 April 2026 00:26:26 +0000 (0:00:00.149) 0:00:05.649 ********
2026-04-18 00:26:33.893042 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:26:33.893054 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:26:33.893065 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:26:33.893109 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:26:33.893121 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:26:33.893131 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:26:33.893142 | orchestrator |
2026-04-18 00:26:33.893153 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-18 00:26:33.893164 | orchestrator | Saturday 18 April 2026 00:26:27 +0000 (0:00:00.771) 0:00:06.421 ********
2026-04-18 00:26:33.893175 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:26:33.893185 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:26:33.893198 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:26:33.893218 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:26:33.893238 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:26:33.893258 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:26:33.893277 | orchestrator |
2026-04-18 00:26:33.893296 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-18 00:26:33.893314 | orchestrator | Saturday 18 April 2026 00:26:28 +0000 (0:00:00.932) 0:00:07.353 ********
2026-04-18 00:26:33.893334 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-18 00:26:33.893355 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-18 00:26:33.893375 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-18 00:26:33.893395 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-18 00:26:33.893413 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-18 00:26:33.893434 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-18 00:26:33.893453 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-18 00:26:33.893471 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-18 00:26:33.893489 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-18 00:26:33.893507 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-18 00:26:33.893523 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-18 00:26:33.893574 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-18 00:26:33.893594 | orchestrator |
2026-04-18 00:26:33.893612 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-18 00:26:33.893631 | orchestrator | Saturday 18 April 2026 00:26:29 +0000 (0:00:01.438) 0:00:08.792 ********
2026-04-18 00:26:33.893643 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:26:33.893655 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:26:33.893666 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:26:33.893677 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:26:33.893688 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:26:33.893698 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:26:33.893709 | orchestrator |
2026-04-18 00:26:33.893720 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-18 00:26:33.893733 | orchestrator | Saturday 18 April 2026 00:26:30 +0000 (0:00:01.277) 0:00:10.069 ********
2026-04-18 00:26:33.893744 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-18 00:26:33.893755 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-18 00:26:33.893766 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-18 00:26:33.893782 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-18 00:26:33.893800 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-18 00:26:33.893874 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-18 00:26:33.893896 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-18 00:26:33.893915 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-18 00:26:33.893934 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-18 00:26:33.893953 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-18 00:26:33.893972 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-18 00:26:33.893983 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-18 00:26:33.894099 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-18 00:26:33.894120 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-18 00:26:33.894132 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-18 00:26:33.894144 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-18 00:26:33.894163 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-18 00:26:33.894191 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-18 00:26:33.894208 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-18 00:26:33.894219 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-18 00:26:33.894230 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-18 00:26:33.894241 | orchestrator |
2026-04-18 00:26:33.894252 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-18 00:26:33.894264 | orchestrator | Saturday 18 April 2026 00:26:31 +0000 (0:00:01.100) 0:00:11.169 ********
2026-04-18 00:26:33.894275 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:26:33.894286 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:26:33.894296 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:26:33.894307 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:26:33.894318 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:26:33.894328 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:26:33.894339 | orchestrator |
2026-04-18 00:26:33.894350 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-18 00:26:33.894361 | orchestrator | Saturday 18 April 2026 00:26:32 +0000 (0:00:00.143) 0:00:11.313 ********
2026-04-18 00:26:33.894371 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:26:33.894382 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:26:33.894393 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:26:33.894403 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:26:33.894414 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:26:33.894425 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:26:33.894435 | orchestrator |
2026-04-18 00:26:33.894446 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-18 00:26:33.894457 | orchestrator | Saturday 18 April 2026 00:26:32 +0000 (0:00:00.183) 0:00:11.496 ********
2026-04-18 00:26:33.894467 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:26:33.894478 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:26:33.894488 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:26:33.894499 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:26:33.894510 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:26:33.894520 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:26:33.894562 | orchestrator |
2026-04-18 00:26:33.894580 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-18 00:26:33.894591 | orchestrator | Saturday 18 April 2026 00:26:32 +0000 (0:00:00.521) 0:00:12.018 ********
2026-04-18 00:26:33.894602 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:26:33.894612 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:26:33.894623 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:26:33.894634 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:26:33.894645 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:26:33.894655 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:26:33.894666 | orchestrator |
2026-04-18 00:26:33.894677 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-18 00:26:33.894687 | orchestrator | Saturday 18 April 2026 00:26:32 +0000 (0:00:00.147) 0:00:12.166 ********
2026-04-18 00:26:33.894698 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-18 00:26:33.894709 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:26:33.894720 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-18 00:26:33.894742 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:26:33.894753 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-18 00:26:33.894767 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-18 00:26:33.894785 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:26:33.894805 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:26:33.894825 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-18 00:26:33.894845 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:26:33.894864 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-18 00:26:33.894884 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:26:33.894904 | orchestrator |
2026-04-18 00:26:33.894923 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-18 00:26:33.894944 | orchestrator | Saturday 18 April 2026 00:26:33 +0000 (0:00:00.752) 0:00:12.918 ********
2026-04-18 00:26:33.894964 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:26:33.894984 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:26:33.895004 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:26:33.895024 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:26:33.895044 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:26:33.895063 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:26:33.895082 | orchestrator |
2026-04-18 00:26:33.895102 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-18 00:26:33.895123 | orchestrator | Saturday 18 April 2026 00:26:33 +0000 (0:00:00.145) 0:00:13.063 ********
2026-04-18 00:26:33.895143 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:26:33.895163 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:26:33.895183 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:26:33.895202 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:26:33.895235 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:26:35.078996 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:26:35.079106 | orchestrator |
2026-04-18 00:26:35.079127 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-18 00:26:35.079138 | orchestrator | Saturday 18 April 2026 00:26:33 +0000 (0:00:00.129) 0:00:13.193 ********
2026-04-18 00:26:35.079147 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:26:35.079160 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:26:35.079171 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:26:35.079178 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:26:35.079185 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:26:35.079192 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:26:35.079199 | orchestrator |
2026-04-18 00:26:35.079207 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-18 00:26:35.079214 | orchestrator | Saturday 18 April 2026 00:26:34 +0000 (0:00:00.126) 0:00:13.319 ********
2026-04-18 00:26:35.079221 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:26:35.079228 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:26:35.079235 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:26:35.079242 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:26:35.079249 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:26:35.079282 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:26:35.079301 | orchestrator |
2026-04-18 00:26:35.079312 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-18 00:26:35.079323 | orchestrator | Saturday 18 April 2026 00:26:34 +0000 (0:00:00.665) 0:00:13.984 ********
2026-04-18 00:26:35.079333 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:26:35.079344 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:26:35.079355 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:26:35.079365 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:26:35.079375 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:26:35.079386 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:26:35.079398 | orchestrator |
2026-04-18 00:26:35.079410 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:26:35.079423 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-18 00:26:35.079466 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-18 00:26:35.079478 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-18 00:26:35.079489 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-18 00:26:35.079502 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-18 00:26:35.079513 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-18 00:26:35.079525 | orchestrator |
2026-04-18 00:26:35.079566 | orchestrator |
2026-04-18 00:26:35.079580 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:26:35.079590 | orchestrator | Saturday 18 April 2026 00:26:34 +0000 (0:00:00.197) 0:00:14.182 ********
2026-04-18 00:26:35.079599 | orchestrator | ===============================================================================
2026-04-18 00:26:35.079607 | orchestrator | Gathering Facts --------------------------------------------------------- 4.27s
2026-04-18 00:26:35.079615 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.44s
2026-04-18 00:26:35.079624 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.28s
2026-04-18 00:26:35.079633 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.10s
2026-04-18 00:26:35.079642 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.93s
2026-04-18 00:26:35.079650 | orchestrator | Do not require tty for all users ---------------------------------------- 0.89s
2026-04-18 00:26:35.079658 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.77s
2026-04-18 00:26:35.079666 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s
2026-04-18 00:26:35.079674 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2026-04-18 00:26:35.079682 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.52s
2026-04-18 00:26:35.079690 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2026-04-18 00:26:35.079699 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s
2026-04-18 00:26:35.079707 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2026-04-18 00:26:35.079715 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-04-18 00:26:35.079722 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-04-18 00:26:35.079731 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-04-18 00:26:35.079740 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-04-18 00:26:35.079748 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2026-04-18 00:26:35.079756 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
2026-04-18 00:26:35.242758 | orchestrator | + osism apply --environment custom facts
2026-04-18 00:26:36.449312 | orchestrator | 2026-04-18 00:26:36 | INFO  | Trying to run play facts in environment custom
2026-04-18 00:26:46.499440 | orchestrator | 2026-04-18 00:26:46 | INFO  | Prepare task for execution of facts.
2026-04-18 00:26:46.567406 | orchestrator | 2026-04-18 00:26:46 | INFO  | Task 0669dc59-b65b-4f68-ad44-16701ba2364e (facts) was prepared for execution.
2026-04-18 00:26:46.567530 | orchestrator | 2026-04-18 00:26:46 | INFO  | It takes a moment until task 0669dc59-b65b-4f68-ad44-16701ba2364e (facts) has been started and output is visible here.
2026-04-18 00:27:30.874300 | orchestrator |
2026-04-18 00:27:30.874397 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-18 00:27:30.874408 | orchestrator |
2026-04-18 00:27:30.874416 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-18 00:27:30.874424 | orchestrator | Saturday 18 April 2026 00:26:49 +0000 (0:00:00.116) 0:00:00.116 ********
2026-04-18 00:27:30.874431 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:27:30.874490 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:27:30.874499 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:27:30.874505 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:27:30.874512 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:27:30.874520 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:27:30.874526 | orchestrator | ok: [testbed-manager]
2026-04-18 00:27:30.874532 | orchestrator |
2026-04-18 00:27:30.874538 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-18 00:27:30.874546 | orchestrator | Saturday 18 April 2026 00:26:50 +0000 (0:00:01.352) 0:00:01.469 ********
2026-04-18 00:27:30.874553 | orchestrator | ok: [testbed-manager]
2026-04-18 00:27:30.874560 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:27:30.874567 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:27:30.874574 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:27:30.874581 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:27:30.874588 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:27:30.874595 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:27:30.874602 | orchestrator |
2026-04-18 00:27:30.874609 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-18 00:27:30.874676 | orchestrator |
2026-04-18 00:27:30.874687 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-18 00:27:30.874694 | orchestrator | Saturday 18 April 2026 00:26:52 +0000 (0:00:01.288) 0:00:02.757 ********
2026-04-18 00:27:30.874701 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:30.874708 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:30.874715 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:30.874723 | orchestrator |
2026-04-18 00:27:30.874730 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-18 00:27:30.874737 | orchestrator | Saturday 18 April 2026 00:26:52 +0000 (0:00:00.103) 0:00:02.861 ********
2026-04-18 00:27:30.874745 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:30.874752 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:30.874759 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:30.874766 | orchestrator |
2026-04-18 00:27:30.874773 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-18 00:27:30.874781 | orchestrator | Saturday 18 April 2026 00:26:52 +0000 (0:00:00.185) 0:00:03.046 ********
2026-04-18 00:27:30.874788 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:30.874795 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:30.874802 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:30.874809 | orchestrator |
2026-04-18 00:27:30.874816 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-18 00:27:30.874823 | orchestrator | Saturday 18 April 2026 00:26:52 +0000 (0:00:00.204) 0:00:03.250 ********
2026-04-18 00:27:30.874831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:27:30.874840 | orchestrator |
2026-04-18 00:27:30.874847 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-18 00:27:30.874857 | orchestrator | Saturday 18 April 2026 00:26:52 +0000 (0:00:00.127) 0:00:03.378 ********
2026-04-18 00:27:30.874867 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:30.874874 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:30.874881 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:30.874890 | orchestrator |
2026-04-18 00:27:30.874918 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-18 00:27:30.874928 | orchestrator | Saturday 18 April 2026 00:26:53 +0000 (0:00:00.441) 0:00:03.820 ********
2026-04-18 00:27:30.874938 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:27:30.874948 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:27:30.874958 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:27:30.874969 | orchestrator |
2026-04-18 00:27:30.874979 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-18 00:27:30.874989 | orchestrator | Saturday 18 April 2026 00:26:53 +0000 (0:00:00.132) 0:00:03.952 ********
2026-04-18 00:27:30.875000 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:27:30.875010 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:27:30.875021 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:27:30.875031 | orchestrator |
2026-04-18 00:27:30.875041 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-18 00:27:30.875050 | orchestrator | Saturday 18 April 2026 00:26:54 +0000 (0:00:01.079) 0:00:05.032 ********
2026-04-18 00:27:30.875056 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:30.875062 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:30.875068 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:30.875074 | orchestrator |
2026-04-18 00:27:30.875079 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-18 00:27:30.875086 | orchestrator | Saturday 18 April 2026 00:26:55 +0000 (0:00:00.482) 0:00:05.514 ********
2026-04-18 00:27:30.875095 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:27:30.875105 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:27:30.875115 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:27:30.875125 | orchestrator |
2026-04-18 00:27:30.875134 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-18 00:27:30.875144 | orchestrator | Saturday 18 April 2026 00:26:56 +0000 (0:00:01.111) 0:00:06.625 ********
2026-04-18 00:27:30.875155 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:27:30.875162 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:27:30.875168 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:27:30.875175 | orchestrator |
2026-04-18 00:27:30.875182 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-18 00:27:30.875189 | orchestrator | Saturday 18 April 2026 00:27:14 +0000 (0:00:18.598) 0:00:25.224 ********
2026-04-18 00:27:30.875195 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:27:30.875202 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:27:30.875209 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:27:30.875216 | orchestrator |
2026-04-18 00:27:30.875223 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-18 00:27:30.875248 | orchestrator | Saturday 18 April 2026 00:27:14 +0000 (0:00:00.083) 0:00:25.307 ********
2026-04-18 00:27:30.875255 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:27:30.875262 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:27:30.875269 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:27:30.875276 | orchestrator |
2026-04-18 00:27:30.875283 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-18 00:27:30.875295 | orchestrator | Saturday 18 April 2026 00:27:22 +0000 (0:00:07.243) 0:00:32.551 ********
2026-04-18 00:27:30.875302 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:30.875309 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:30.875316 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:30.875323 | orchestrator |
2026-04-18 00:27:30.875330 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-18 00:27:30.875337 | orchestrator | Saturday 18 April 2026 00:27:22 +0000 (0:00:00.425) 0:00:32.976 ********
2026-04-18 00:27:30.875344 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-18 00:27:30.875351 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-18 00:27:30.875358 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-18 00:27:30.875371 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-18 00:27:30.875378 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-18 00:27:30.875385 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-18 00:27:30.875392 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-18 00:27:30.875399 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-18 00:27:30.875406 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-18 00:27:30.875413 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-18 00:27:30.875420 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-18 00:27:30.875427 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-18 00:27:30.875434 | orchestrator |
2026-04-18 00:27:30.875440 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-18 00:27:30.875447 | orchestrator | Saturday 18 April 2026 00:27:25 +0000 (0:00:03.410) 0:00:36.387 ********
2026-04-18 00:27:30.875454 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:30.875461 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:30.875468 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:30.875475 | orchestrator |
2026-04-18 00:27:30.875482 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-18 00:27:30.875489 | orchestrator |
2026-04-18 00:27:30.875496 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-18 00:27:30.875503 | orchestrator | Saturday 18 April 2026 00:27:27 +0000 (0:00:01.294) 0:00:37.681 ********
2026-04-18 00:27:30.875510 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:27:30.875516 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:27:30.875523 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:27:30.875530 | orchestrator | ok: [testbed-manager]
2026-04-18 00:27:30.875538 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:30.875545 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:30.875552 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:30.875558 | orchestrator |
2026-04-18 00:27:30.875565 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:27:30.875572 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:27:30.875579 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:27:30.875586 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:27:30.875594 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:27:30.875600 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:27:30.875608 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:27:30.875614 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:27:30.875635 | orchestrator |
2026-04-18 00:27:30.875642 | orchestrator |
2026-04-18 00:27:30.875649 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:27:30.875656 | orchestrator | Saturday 18 April 2026 00:27:30 +0000 (0:00:03.679) 0:00:41.360 ********
2026-04-18 00:27:30.875663 | orchestrator | ===============================================================================
2026-04-18 00:27:30.875669 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.60s
2026-04-18 00:27:30.875681 | orchestrator | Install required packages (Debian) -------------------------------------- 7.24s
2026-04-18 00:27:30.875688 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.68s
2026-04-18 00:27:30.875695 | orchestrator | Copy fact files --------------------------------------------------------- 3.41s
2026-04-18 00:27:30.875702 | orchestrator | Create custom facts directory ------------------------------------------- 1.35s
2026-04-18 00:27:30.875708 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.29s
2026-04-18 00:27:30.875720 | orchestrator | Copy fact file ---------------------------------------------------------- 1.29s
2026-04-18 00:27:31.067156 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.11s
2026-04-18 00:27:31.067255 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s
2026-04-18 00:27:31.067270 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2026-04-18 00:27:31.067281 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-04-18 00:27:31.067292 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2026-04-18 00:27:31.067303 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-04-18 00:27:31.067314 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-04-18 00:27:31.067325 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-04-18 00:27:31.067336 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2026-04-18 00:27:31.067348 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-04-18 00:27:31.067359 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2026-04-18 00:27:31.239444 | orchestrator | + osism apply bootstrap
2026-04-18 00:27:42.549058 | orchestrator | 2026-04-18 00:27:42 | INFO  | Prepare task for execution of bootstrap.
2026-04-18 00:27:42.624444 | orchestrator | 2026-04-18 00:27:42 | INFO  | Task 6ac05ee9-aee8-4ba8-af54-4cf3d23c16c6 (bootstrap) was prepared for execution.
2026-04-18 00:27:42.624605 | orchestrator | 2026-04-18 00:27:42 | INFO  | It takes a moment until task 6ac05ee9-aee8-4ba8-af54-4cf3d23c16c6 (bootstrap) has been started and output is visible here.
2026-04-18 00:27:58.216321 | orchestrator |
2026-04-18 00:27:58.216396 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-18 00:27:58.216405 | orchestrator |
2026-04-18 00:27:58.216410 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-18 00:27:58.216416 | orchestrator | Saturday 18 April 2026 00:27:45 +0000 (0:00:00.189) 0:00:00.189 ********
2026-04-18 00:27:58.216421 | orchestrator | ok: [testbed-manager]
2026-04-18 00:27:58.216427 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:27:58.216432 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:27:58.216437 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:27:58.216442 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:58.216446 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:58.216451 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:58.216455 | orchestrator |
2026-04-18 00:27:58.216460 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-18 00:27:58.216465 | orchestrator |
2026-04-18 00:27:58.216469 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-18 00:27:58.216474 | orchestrator | Saturday 18 April 2026 00:27:46 +0000 (0:00:00.275) 0:00:00.465 ********
2026-04-18 00:27:58.216479 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:27:58.216483 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:27:58.216488 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:27:58.216492 | orchestrator | ok: [testbed-manager]
2026-04-18 00:27:58.216497 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:58.216518 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:58.216523 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:58.216528 | orchestrator |
2026-04-18 00:27:58.216532 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-18 00:27:58.216553 | orchestrator |
2026-04-18 00:27:58.216558 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-18 00:27:58.216563 | orchestrator | Saturday 18 April 2026 00:27:50 +0000 (0:00:04.544) 0:00:05.010 ********
2026-04-18 00:27:58.216568 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-18 00:27:58.216573 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-18 00:27:58.216578 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-18 00:27:58.216582 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-18 00:27:58.216587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-18 00:27:58.216592 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-18 00:27:58.216596 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-18 00:27:58.216601 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-18 00:27:58.216606 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-18 00:27:58.216610 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-18 00:27:58.216615 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-18 00:27:58.216619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-18 00:27:58.216691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-18 00:27:58.216697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-18 00:27:58.216702 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-18 00:27:58.216706 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-18 00:27:58.216711 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-18 00:27:58.216716 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-18 00:27:58.216720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-18 00:27:58.216779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-18 00:27:58.216784 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-18 00:27:58.216789 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-18 00:27:58.216794 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:27:58.216813 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-18 00:27:58.216819 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-18 00:27:58.216824 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-18 00:27:58.216828 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-18 00:27:58.216833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-18 00:27:58.216842 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-18 00:27:58.216847 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-18 00:27:58.216853 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-18 00:27:58.216858 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-18 00:27:58.216863 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-18 00:27:58.216868 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-18 00:27:58.216875 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-18 00:27:58.216883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-18 00:27:58.216893 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:27:58.216903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-18 00:27:58.216910 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-18 00:27:58.216917 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-18 00:27:58.216924 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-18 00:27:58.216943 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:27:58.216950 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-18 00:27:58.216958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:27:58.216966 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-18 00:27:58.216975 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:27:58.216998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-18 00:27:58.217004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:27:58.217010 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-18 00:27:58.217015 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-18 00:27:58.217020 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:27:58.217025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:27:58.217030 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:27:58.217035 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-18 00:27:58.217040 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-18 00:27:58.217045 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:27:58.217050 | orchestrator |
2026-04-18 00:27:58.217056 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-18 00:27:58.217061 | orchestrator |
2026-04-18 00:27:58.217066 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-18 00:27:58.217071 | orchestrator | Saturday 18 April 2026 00:27:51 +0000 (0:00:00.424) 0:00:05.434 ********
2026-04-18 00:27:58.217076 | orchestrator | ok: [testbed-manager]
2026-04-18 00:27:58.217081 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:58.217087 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:27:58.217092 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:27:58.217097 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:58.217102 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:58.217107 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:27:58.217112 | orchestrator |
2026-04-18 00:27:58.217116 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-18 00:27:58.217121 | orchestrator | Saturday 18 April 2026 00:27:52 +0000 (0:00:01.241) 0:00:06.676 ********
2026-04-18 00:27:58.217126 | orchestrator | ok: [testbed-manager]
2026-04-18 00:27:58.217130 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:27:58.217134 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:27:58.217139 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:27:58.217143 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:27:58.217148 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:27:58.217152 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:27:58.217159 | orchestrator |
2026-04-18 00:27:58.217167 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-18 00:27:58.217178 | orchestrator | Saturday 18 April 2026 00:27:53 +0000 (0:00:00.269) 0:00:07.952 ********
2026-04-18 00:27:58.217188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:27:58.217197 | orchestrator |
2026-04-18 00:27:58.217205 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-18 00:27:58.217212 | orchestrator | Saturday 18 April 2026 00:27:53 +0000 (0:00:00.269) 0:00:08.222 ********
2026-04-18 00:27:58.217220 | orchestrator | changed: [testbed-manager]
2026-04-18 00:27:58.217227 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:27:58.217235 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:27:58.217240 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:27:58.217244 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:27:58.217249 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:27:58.217253 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:27:58.217258 | orchestrator |
2026-04-18 00:27:58.217262 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-18 00:27:58.217273 | orchestrator | Saturday 18 April 2026 00:27:55 +0000 (0:00:01.576) 0:00:09.799 ********
2026-04-18 00:27:58.217278 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:27:58.217284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:27:58.217290 | orchestrator |
2026-04-18 00:27:58.217295 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-18 00:27:58.217299 | orchestrator | Saturday 18 April 2026 00:27:55 +0000 (0:00:00.301) 0:00:10.100 ********
2026-04-18 00:27:58.217304 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:27:58.217308 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:27:58.217313 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:27:58.217317 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:27:58.217322 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:27:58.217330 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:27:58.217335 | orchestrator |
2026-04-18 00:27:58.217339 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-18 00:27:58.217344 | orchestrator | Saturday 18 April 2026 00:27:56 +0000 (0:00:01.165) 0:00:11.266 ********
2026-04-18 00:27:58.217348 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:27:58.217353 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:27:58.217357 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:27:58.217361 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:27:58.217366 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:27:58.217370 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:27:58.217375 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:27:58.217379 | orchestrator |
2026-04-18 00:27:58.217383 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-18 00:27:58.217388 | orchestrator | Saturday 18 April 2026 00:27:57 +0000 (0:00:00.696) 0:00:11.962 ********
2026-04-18 00:27:58.217392 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:27:58.217397 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:27:58.217401 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:27:58.217406 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:27:58.217410 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:27:58.217415 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:27:58.217419 | orchestrator | ok: [testbed-manager]
2026-04-18 00:27:58.217423 | orchestrator |
2026-04-18 00:27:58.217428 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-18 00:27:58.217433 | orchestrator | Saturday 18 April 2026 00:27:58 +0000 (0:00:00.474) 0:00:12.436 ********
2026-04-18 00:27:58.217438 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:27:58.217442 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:27:58.217451 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:28:10.339077 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:28:10.339152 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:28:10.339159 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:28:10.339164 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:28:10.339169 | orchestrator |
2026-04-18 00:28:10.339175 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-18 00:28:10.339181 | orchestrator | Saturday 18 April 2026 00:27:58 +0000 (0:00:00.220) 0:00:12.657 ********
2026-04-18 00:28:10.339188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:28:10.339204 | orchestrator |
2026-04-18 00:28:10.339209 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-18 00:28:10.339214 | orchestrator | Saturday 18 April 2026 00:27:58 +0000 (0:00:00.283) 0:00:12.940 ********
2026-04-18 00:28:10.339219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:28:10.339241 | orchestrator |
2026-04-18 00:28:10.339246 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-18 00:28:10.339251 | orchestrator | Saturday 18 April 2026 00:27:58 +0000 (0:00:00.276) 0:00:13.216 ********
2026-04-18 00:28:10.339256 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.339261 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:10.339266 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:10.339270 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.339275 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:10.339279 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:10.339284 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:10.339288 | orchestrator |
2026-04-18 00:28:10.339293 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-18 00:28:10.339298 | orchestrator | Saturday 18 April 2026 00:28:00 +0000 (0:00:01.428) 0:00:14.644 ********
2026-04-18 00:28:10.339303 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:28:10.339307 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:28:10.339312 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:28:10.339316 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:28:10.339321 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:28:10.339325 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:28:10.339330 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:28:10.339334 | orchestrator |
2026-04-18 00:28:10.339339 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-18 00:28:10.339343 | orchestrator | Saturday 18 April 2026 00:28:00 +0000 (0:00:00.213) 0:00:14.858 ********
2026-04-18 00:28:10.339348 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.339352 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:10.339357 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:10.339361 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:10.339366 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:10.339370 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.339375 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:10.339379 | orchestrator |
2026-04-18 00:28:10.339384 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-18 00:28:10.339389 | orchestrator | Saturday 18 April 2026 00:28:01 +0000 (0:00:00.601) 0:00:15.459 ********
2026-04-18 00:28:10.339393 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:28:10.339398 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:28:10.339402 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:28:10.339407 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:28:10.339411 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:28:10.339416 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:28:10.339420 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:28:10.339425 | orchestrator |
2026-04-18 00:28:10.339429 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-18 00:28:10.339435 | orchestrator | Saturday 18 April 2026 00:28:01 +0000 (0:00:00.271) 0:00:15.731 ********
2026-04-18 00:28:10.339439 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.339444 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:28:10.339448 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:28:10.339453 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:28:10.339458 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:28:10.339462 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:28:10.339467 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:28:10.339471 | orchestrator |
2026-04-18 00:28:10.339476 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-18 00:28:10.339481 | orchestrator | Saturday 18 April 2026 00:28:01 +0000 (0:00:00.609) 0:00:16.341 ********
2026-04-18 00:28:10.339485 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.339495 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:28:10.339500 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:28:10.339505 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:28:10.339509 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:28:10.339513 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:28:10.339518 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:28:10.339523 | orchestrator |
2026-04-18 00:28:10.339527 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-18 00:28:10.339532 | orchestrator | Saturday 18 April 2026 00:28:03 +0000 (0:00:01.203) 0:00:17.544 ********
2026-04-18 00:28:10.339536 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.339541 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:10.339545 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:10.339550 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:10.339554 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.339559 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:10.339563 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:10.339568 | orchestrator |
2026-04-18 00:28:10.339572 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-18 00:28:10.339577 | orchestrator | Saturday 18 April 2026 00:28:04 +0000 (0:00:01.113) 0:00:18.657 ********
2026-04-18 00:28:10.339600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:28:10.339606 | orchestrator |
2026-04-18 00:28:10.339610 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-18 00:28:10.339615 | orchestrator | Saturday 18 April 2026 00:28:04 +0000 (0:00:00.300) 0:00:18.958 ********
2026-04-18 00:28:10.339619 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:28:10.339624 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:28:10.339673 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:28:10.339681 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:28:10.339688 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:28:10.339695 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:28:10.339702 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:28:10.339710 | orchestrator |
2026-04-18 00:28:10.339716 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-18 00:28:10.339722 | orchestrator | Saturday 18 April 2026 00:28:05 +0000 (0:00:01.299) 0:00:20.257 ********
2026-04-18 00:28:10.339727 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.339732 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:10.339737 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:10.339742 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:10.339747 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:10.339753 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.339757 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:10.339762 | orchestrator |
2026-04-18 00:28:10.339768 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-18 00:28:10.339773 | orchestrator | Saturday 18 April 2026 00:28:06 +0000 (0:00:00.233) 0:00:20.491 ********
2026-04-18 00:28:10.339778 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.339783 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:10.339788 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:10.339793 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:10.339798 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:10.339803 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.339808 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:10.339813 | orchestrator |
2026-04-18 00:28:10.339819 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-18 00:28:10.339824 | orchestrator | Saturday 18 April 2026 00:28:06 +0000 (0:00:00.212) 0:00:20.704 ********
2026-04-18 00:28:10.339829 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.339834 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:10.339839 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:10.339849 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:10.339854 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:10.339859 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.339864 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:10.339869 | orchestrator |
2026-04-18 00:28:10.339875 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-18 00:28:10.339881 | orchestrator | Saturday 18 April 2026 00:28:06 +0000 (0:00:00.221) 0:00:20.926 ********
2026-04-18 00:28:10.339888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:28:10.339895 | orchestrator |
2026-04-18 00:28:10.339901 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-18 00:28:10.339907 | orchestrator | Saturday 18 April 2026 00:28:06 +0000 (0:00:00.260) 0:00:21.186 ********
2026-04-18 00:28:10.339912 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.339918 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:10.339924 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:10.339929 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:10.339935 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:10.339940 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:10.339946 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.339952 | orchestrator |
2026-04-18 00:28:10.339958 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-18 00:28:10.339963 | orchestrator | Saturday 18 April 2026 00:28:07 +0000 (0:00:00.606) 0:00:21.793 ********
2026-04-18 00:28:10.339969 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:28:10.339975 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:28:10.339980 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:28:10.339986 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:28:10.339992 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:28:10.340001 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:28:10.340007 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:28:10.340013 | orchestrator |
2026-04-18 00:28:10.340018 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-18 00:28:10.340023 | orchestrator | Saturday 18 April 2026 00:28:07 +0000 (0:00:00.213) 0:00:22.006 ********
2026-04-18 00:28:10.340028 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.340033 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:28:10.340038 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.340043 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:28:10.340048 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:10.340053 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:28:10.340058 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:10.340063 | orchestrator |
2026-04-18 00:28:10.340068 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-18 00:28:10.340073 | orchestrator | Saturday 18 April 2026 00:28:08 +0000 (0:00:01.145) 0:00:23.151 ********
2026-04-18 00:28:10.340078 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.340083 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:10.340088 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:10.340093 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:10.340098 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:10.340103 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.340108 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:10.340113 | orchestrator |
2026-04-18 00:28:10.340118 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-18 00:28:10.340123 | orchestrator | Saturday 18 April 2026 00:28:09 +0000 (0:00:00.514) 0:00:23.665 ********
2026-04-18 00:28:10.340129 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:10.340134 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:28:10.340139 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:10.340144 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:28:10.340153 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:52.458675 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:52.458794 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:28:52.458812 | orchestrator |
2026-04-18 00:28:52.458825 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-18 00:28:52.458838 | orchestrator | Saturday 18 April 2026 00:28:10 +0000 (0:00:01.181) 0:00:24.847 ********
2026-04-18 00:28:52.458849 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:52.458860 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:52.458873 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:52.458892 | orchestrator | changed: [testbed-manager]
2026-04-18 00:28:52.458911 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:28:52.458929 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:28:52.458948 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:28:52.458965 | orchestrator |
2026-04-18 00:28:52.458983 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-18 00:28:52.459000 | orchestrator | Saturday 18 April 2026 00:28:27 +0000 (0:00:17.295) 0:00:42.142 ********
2026-04-18 00:28:52.459018 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:52.459035 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:52.459052 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:52.459069 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:52.459086 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:52.459104 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:52.459119 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:52.459135 | orchestrator |
2026-04-18 00:28:52.459152 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-18 00:28:52.459171 | orchestrator | Saturday 18 April 2026 00:28:27 +0000 (0:00:00.213) 0:00:42.356 ********
2026-04-18 00:28:52.459190 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:52.459208 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:52.459225 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:52.459243 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:52.459261 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:52.459277 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:52.459295 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:52.459312 | orchestrator |
2026-04-18 00:28:52.459330 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-18 00:28:52.459350 | orchestrator | Saturday 18 April 2026 00:28:28 +0000 (0:00:00.212) 0:00:42.568 ********
2026-04-18 00:28:52.459367 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:52.459384 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:52.459403 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:52.459420 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:52.459437 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:52.459454 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:52.459471 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:52.459486 | orchestrator |
2026-04-18 00:28:52.459504 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-18 00:28:52.459521 | orchestrator | Saturday 18 April 2026 00:28:28 +0000 (0:00:00.207) 0:00:42.776 ********
2026-04-18 00:28:52.459540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:28:52.459560 | orchestrator |
2026-04-18 00:28:52.459576 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-18 00:28:52.459594 | orchestrator | Saturday 18 April 2026 00:28:28 +0000 (0:00:00.277) 0:00:43.054 ********
2026-04-18 00:28:52.459611 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:52.459664 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:52.459683 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:52.459702 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:52.459719 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:52.459735 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:52.459750 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:52.459766 | orchestrator |
2026-04-18 00:28:52.459821 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-18 00:28:52.459840 | orchestrator | Saturday 18 April 2026 00:28:30 +0000 (0:00:01.770) 0:00:44.824 ********
2026-04-18 00:28:52.459856 | orchestrator | changed: [testbed-manager]
2026-04-18 00:28:52.459874 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:28:52.459890 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:28:52.459908 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:28:52.459925 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:28:52.459941 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:28:52.459956 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:28:52.459972 | orchestrator |
2026-04-18 00:28:52.460008 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-18 00:28:52.460027 | orchestrator | Saturday 18 April 2026 00:28:31 +0000 (0:00:01.179) 0:00:46.003 ********
2026-04-18 00:28:52.460044 | orchestrator | ok: [testbed-manager]
2026-04-18 00:28:52.460061 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:28:52.460078 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:28:52.460094 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:28:52.460112 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:28:52.460129 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:28:52.460147 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:28:52.460164 | orchestrator |
2026-04-18 00:28:52.460182 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-18 00:28:52.460199 | orchestrator | Saturday 18 April 2026 00:28:32 +0000 (0:00:00.284) 0:00:46.825 ********
2026-04-18 00:28:52.460219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:28:52.460241 | orchestrator |
2026-04-18 00:28:52.460260 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-18 00:28:52.460280 | orchestrator | Saturday 18 April 2026 00:28:32 +0000 (0:00:00.284) 0:00:47.109 ********
2026-04-18 00:28:52.460291 | orchestrator | changed: [testbed-manager]
2026-04-18 00:28:52.460302 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:28:52.460313 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:28:52.460324 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:28:52.460334 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:28:52.460345 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:28:52.460356 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:28:52.460366 | orchestrator |
2026-04-18 00:28:52.460402 | orchestrator | TASK [osism.services.rsyslog :
Include additional log server tasks] ************ 2026-04-18 00:28:52.460413 | orchestrator | Saturday 18 April 2026 00:28:33 +0000 (0:00:00.986) 0:00:48.096 ******** 2026-04-18 00:28:52.460424 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:28:52.460435 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:28:52.460445 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:28:52.460456 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:28:52.460466 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:28:52.460477 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:28:52.460487 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:28:52.460498 | orchestrator | 2026-04-18 00:28:52.460509 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-18 00:28:52.460519 | orchestrator | Saturday 18 April 2026 00:28:33 +0000 (0:00:00.172) 0:00:48.269 ******** 2026-04-18 00:28:52.460530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:28:52.460541 | orchestrator | 2026-04-18 00:28:52.460552 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-18 00:28:52.460563 | orchestrator | Saturday 18 April 2026 00:28:34 +0000 (0:00:00.287) 0:00:48.556 ******** 2026-04-18 00:28:52.460589 | orchestrator | ok: [testbed-manager] 2026-04-18 00:28:52.460600 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:28:52.460610 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:28:52.460708 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:28:52.460722 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:28:52.460733 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:28:52.460744 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:28:52.460754 | 
orchestrator | 2026-04-18 00:28:52.460765 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-18 00:28:52.460776 | orchestrator | Saturday 18 April 2026 00:28:36 +0000 (0:00:01.844) 0:00:50.401 ******** 2026-04-18 00:28:52.460787 | orchestrator | changed: [testbed-manager] 2026-04-18 00:28:52.460799 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:28:52.460810 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:28:52.460821 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:28:52.460832 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:28:52.460843 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:28:52.460853 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:28:52.460864 | orchestrator | 2026-04-18 00:28:52.460875 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-18 00:28:52.460886 | orchestrator | Saturday 18 April 2026 00:28:37 +0000 (0:00:01.234) 0:00:51.635 ******** 2026-04-18 00:28:52.460897 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:28:52.460908 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:28:52.460919 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:28:52.460930 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:28:52.460940 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:28:52.460951 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:28:52.460962 | orchestrator | changed: [testbed-manager] 2026-04-18 00:28:52.460973 | orchestrator | 2026-04-18 00:28:52.460984 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-18 00:28:52.460995 | orchestrator | Saturday 18 April 2026 00:28:49 +0000 (0:00:11.902) 0:01:03.537 ******** 2026-04-18 00:28:52.461006 | orchestrator | ok: [testbed-manager] 2026-04-18 00:28:52.461017 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:28:52.461028 | orchestrator | ok: 
[testbed-node-0] 2026-04-18 00:28:52.461038 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:28:52.461049 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:28:52.461060 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:28:52.461071 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:28:52.461082 | orchestrator | 2026-04-18 00:28:52.461093 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-18 00:28:52.461104 | orchestrator | Saturday 18 April 2026 00:28:50 +0000 (0:00:01.494) 0:01:05.032 ******** 2026-04-18 00:28:52.461115 | orchestrator | ok: [testbed-manager] 2026-04-18 00:28:52.461126 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:28:52.461136 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:28:52.461147 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:28:52.461158 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:28:52.461169 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:28:52.461179 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:28:52.461190 | orchestrator | 2026-04-18 00:28:52.461201 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-18 00:28:52.461213 | orchestrator | Saturday 18 April 2026 00:28:51 +0000 (0:00:01.124) 0:01:06.156 ******** 2026-04-18 00:28:52.461224 | orchestrator | ok: [testbed-manager] 2026-04-18 00:28:52.461235 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:28:52.461246 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:28:52.461256 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:28:52.461267 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:28:52.461278 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:28:52.461289 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:28:52.461300 | orchestrator | 2026-04-18 00:28:52.461311 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-18 00:28:52.461322 | orchestrator | Saturday 
18 April 2026 00:28:51 +0000 (0:00:00.201) 0:01:06.357 ******** 2026-04-18 00:28:52.461342 | orchestrator | ok: [testbed-manager] 2026-04-18 00:28:52.461353 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:28:52.461364 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:28:52.461375 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:28:52.461386 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:28:52.461397 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:28:52.461408 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:28:52.461418 | orchestrator | 2026-04-18 00:28:52.461430 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-18 00:28:52.461440 | orchestrator | Saturday 18 April 2026 00:28:52 +0000 (0:00:00.196) 0:01:06.554 ******** 2026-04-18 00:28:52.461452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:28:52.461465 | orchestrator | 2026-04-18 00:28:52.461490 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-18 00:31:14.392120 | orchestrator | Saturday 18 April 2026 00:28:52 +0000 (0:00:00.260) 0:01:06.814 ******** 2026-04-18 00:31:14.392222 | orchestrator | ok: [testbed-manager] 2026-04-18 00:31:14.392233 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:31:14.392240 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:31:14.392246 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:31:14.392257 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:31:14.392268 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:31:14.392278 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:31:14.392288 | orchestrator | 2026-04-18 00:31:14.392297 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-04-18 00:31:14.392309 | orchestrator | Saturday 18 April 2026 00:28:54 +0000 (0:00:01.985) 0:01:08.799 ******** 2026-04-18 00:31:14.392319 | orchestrator | changed: [testbed-manager] 2026-04-18 00:31:14.392331 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:31:14.392341 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:31:14.392353 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:31:14.392422 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:31:14.392433 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:31:14.392443 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:31:14.392453 | orchestrator | 2026-04-18 00:31:14.392485 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-18 00:31:14.392497 | orchestrator | Saturday 18 April 2026 00:28:55 +0000 (0:00:00.683) 0:01:09.483 ******** 2026-04-18 00:31:14.392506 | orchestrator | ok: [testbed-manager] 2026-04-18 00:31:14.392518 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:31:14.392530 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:31:14.392540 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:31:14.392550 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:31:14.392558 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:31:14.392568 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:31:14.392577 | orchestrator | 2026-04-18 00:31:14.392587 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-18 00:31:14.392598 | orchestrator | Saturday 18 April 2026 00:28:55 +0000 (0:00:00.305) 0:01:09.789 ******** 2026-04-18 00:31:14.392609 | orchestrator | ok: [testbed-manager] 2026-04-18 00:31:14.392621 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:31:14.392632 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:31:14.392643 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:31:14.392653 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:31:14.392663 | 
orchestrator | ok: [testbed-node-2] 2026-04-18 00:31:14.392673 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:31:14.392684 | orchestrator | 2026-04-18 00:31:14.392694 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-18 00:31:14.392704 | orchestrator | Saturday 18 April 2026 00:28:56 +0000 (0:00:01.523) 0:01:11.313 ******** 2026-04-18 00:31:14.392714 | orchestrator | changed: [testbed-manager] 2026-04-18 00:31:14.392728 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:31:14.392771 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:31:14.392779 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:31:14.392784 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:31:14.392791 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:31:14.392796 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:31:14.392804 | orchestrator | 2026-04-18 00:31:14.392811 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-18 00:31:14.392817 | orchestrator | Saturday 18 April 2026 00:28:59 +0000 (0:00:02.432) 0:01:13.745 ******** 2026-04-18 00:31:14.392823 | orchestrator | ok: [testbed-manager] 2026-04-18 00:31:14.392829 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:31:14.392836 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:31:14.392846 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:31:14.392854 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:31:14.392864 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:31:14.392873 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:31:14.392882 | orchestrator | 2026-04-18 00:31:14.392890 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-18 00:31:14.392897 | orchestrator | Saturday 18 April 2026 00:29:02 +0000 (0:00:02.664) 0:01:16.409 ******** 2026-04-18 00:31:14.392905 | orchestrator | ok: [testbed-manager] 2026-04-18 00:31:14.392914 
| orchestrator | ok: [testbed-node-0] 2026-04-18 00:31:14.392924 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:31:14.392933 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:31:14.392942 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:31:14.392952 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:31:14.392963 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:31:14.392973 | orchestrator | 2026-04-18 00:31:14.392983 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-18 00:31:14.392993 | orchestrator | Saturday 18 April 2026 00:29:40 +0000 (0:00:38.619) 0:01:55.029 ******** 2026-04-18 00:31:14.393003 | orchestrator | changed: [testbed-manager] 2026-04-18 00:31:14.393023 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:31:14.393034 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:31:14.393044 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:31:14.393054 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:31:14.393063 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:31:14.393074 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:31:14.393083 | orchestrator | 2026-04-18 00:31:14.393093 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-18 00:31:14.393105 | orchestrator | Saturday 18 April 2026 00:30:59 +0000 (0:01:18.571) 0:03:13.601 ******** 2026-04-18 00:31:14.393115 | orchestrator | ok: [testbed-manager] 2026-04-18 00:31:14.393126 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:31:14.393133 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:31:14.393139 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:31:14.393145 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:31:14.393150 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:31:14.393157 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:31:14.393163 | orchestrator | 2026-04-18 00:31:14.393169 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-04-18 00:31:14.393177 | orchestrator | Saturday 18 April 2026 00:31:01 +0000 (0:00:01.918) 0:03:15.519 ******** 2026-04-18 00:31:14.393183 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:31:14.393189 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:31:14.393196 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:31:14.393202 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:31:14.393208 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:31:14.393215 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:31:14.393221 | orchestrator | changed: [testbed-manager] 2026-04-18 00:31:14.393228 | orchestrator | 2026-04-18 00:31:14.393235 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-18 00:31:14.393241 | orchestrator | Saturday 18 April 2026 00:31:13 +0000 (0:00:11.987) 0:03:27.506 ******** 2026-04-18 00:31:14.393278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-18 00:31:14.393305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-18 00:31:14.393315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-18 00:31:14.393324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-18 00:31:14.393331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-18 00:31:14.393339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-04-18 00:31:14.393345 | orchestrator | 2026-04-18 00:31:14.393352 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-18 00:31:14.393358 | orchestrator | Saturday 18 April 2026 00:31:13 +0000 (0:00:00.430) 0:03:27.936 ******** 2026-04-18 00:31:14.393412 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-18 00:31:14.393416 | orchestrator | 
skipping: [testbed-manager] 2026-04-18 00:31:14.393420 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-18 00:31:14.393424 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-18 00:31:14.393432 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:31:14.393436 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:31:14.393440 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-18 00:31:14.393444 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:31:14.393448 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-18 00:31:14.393451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-18 00:31:14.393455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-18 00:31:14.393459 | orchestrator | 2026-04-18 00:31:14.393463 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-18 00:31:14.393466 | orchestrator | Saturday 18 April 2026 00:31:14 +0000 (0:00:00.755) 0:03:28.692 ******** 2026-04-18 00:31:14.393477 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-18 00:31:14.393486 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-18 00:31:14.393491 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-18 00:31:14.393500 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-18 00:31:14.393510 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-18 00:31:14.393526 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-18 00:31:21.687601 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-18 00:31:21.687695 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-18 00:31:21.687708 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-18 00:31:21.687720 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-18 00:31:21.687731 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:31:21.687739 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-18 00:31:21.687745 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-18 00:31:21.687752 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-18 00:31:21.687758 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-18 00:31:21.687765 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-18 00:31:21.687771 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-18 00:31:21.687777 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-18 00:31:21.687783 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-18 00:31:21.687790 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-18 00:31:21.687796 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-18 00:31:21.687802 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-18 00:31:21.687808 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-18 00:31:21.687814 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-18 00:31:21.687821 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-18 00:31:21.687827 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-18 00:31:21.687833 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:31:21.687839 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-18 00:31:21.687846 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-18 00:31:21.687853 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-18 00:31:21.687859 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-18 00:31:21.687865 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-18 00:31:21.687893 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:31:21.687900 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-18 00:31:21.687906 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-18 00:31:21.687913 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-18 00:31:21.687953 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.core.wmem_max', 'value': 16777216})  2026-04-18 00:31:21.687960 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-18 00:31:21.687967 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-18 00:31:21.687973 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-18 00:31:21.687979 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-18 00:31:21.687985 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-18 00:31:21.687991 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-18 00:31:21.687998 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:31:21.688004 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-18 00:31:21.688010 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-18 00:31:21.688016 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-18 00:31:21.688022 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-18 00:31:21.688028 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-18 00:31:21.688048 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-18 00:31:21.688054 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-18 00:31:21.688061 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-18 00:31:21.688067 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-18 00:31:21.688073 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-18 00:31:21.688079 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-18 00:31:21.688086 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-18 00:31:21.688092 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-18 00:31:21.688098 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-18 00:31:21.688104 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-18 00:31:21.688110 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-18 00:31:21.688116 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-18 00:31:21.688123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-18 00:31:21.688130 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-18 00:31:21.688137 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-18 00:31:21.688144 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-18 00:31:21.688151 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-18 00:31:21.688164 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-18 00:31:21.688171 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-18 00:31:21.688178 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-18 00:31:21.688185 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-18 00:31:21.688191 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-18 00:31:21.688198 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-18 00:31:21.688205 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-18 00:31:21.688212 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-18 00:31:21.688219 | orchestrator |
2026-04-18 00:31:21.688226 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-18 00:31:21.688233 | orchestrator | Saturday 18 April 2026 00:31:19 +0000 (0:00:05.112) 0:03:33.804 ********
2026-04-18 00:31:21.688240 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-18 00:31:21.688247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-18 00:31:21.688254 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-18 00:31:21.688261 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-18 00:31:21.688268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-18 00:31:21.688281 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-18 00:31:21.688288 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-18 00:31:21.688295 | orchestrator |
2026-04-18 00:31:21.688303 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-18 00:31:21.688309 | orchestrator | Saturday 18 April 2026 00:31:21 +0000 (0:00:01.647) 0:03:35.451 ********
2026-04-18 00:31:21.688316 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:21.688324 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:21.688331 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:31:21.688338 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:21.688345 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:31:21.688367 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:21.688375 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:31:21.688382 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:31:21.688389 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:21.688396 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:21.688408 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:34.115160 | orchestrator |
2026-04-18 00:31:34.115307 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-18 00:31:34.115368 | orchestrator | Saturday 18 April 2026 00:31:21 +0000 (0:00:00.628) 0:03:36.080 ********
2026-04-18 00:31:34.115390 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:34.115411 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:31:34.115471 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:34.115494 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:31:34.115548 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:34.115569 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:31:34.115588 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:34.115607 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:31:34.115626 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:34.115645 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:34.115664 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-18 00:31:34.115682 | orchestrator |
2026-04-18 00:31:34.115697 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-18 00:31:34.115714 | orchestrator | Saturday 18 April 2026 00:31:22 +0000 (0:00:00.610) 0:03:36.691 ********
2026-04-18 00:31:34.115751 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-18 00:31:34.115786 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-18 00:31:34.115805 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:31:34.115824 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-18 00:31:34.115842 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:31:34.115859 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-18 00:31:34.115877 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:31:34.115895 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:31:34.115914 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-18 00:31:34.115933 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-18 00:31:34.115952 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-18 00:31:34.115965 | orchestrator |
2026-04-18 00:31:34.115977 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-18 00:31:34.115988 | orchestrator | Saturday 18 April 2026 00:31:23 +0000 (0:00:00.732) 0:03:37.423 ********
2026-04-18 00:31:34.115999 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:31:34.116009 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:31:34.116029 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:31:34.116053 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:31:34.116079 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:31:34.116096 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:31:34.116113 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:31:34.116131 | orchestrator |
2026-04-18 00:31:34.116148 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-18 00:31:34.116165 | orchestrator | Saturday 18 April 2026 00:31:23 +0000 (0:00:00.301) 0:03:37.724 ********
2026-04-18 00:31:34.116182 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:31:34.116220 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:31:34.116240 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:31:34.116258 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:31:34.116275 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:31:34.116294 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:31:34.116313 | orchestrator | ok: [testbed-manager]
2026-04-18 00:31:34.116330 | orchestrator |
2026-04-18 00:31:34.116378 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-18 00:31:34.116390 | orchestrator | Saturday 18 April 2026 00:31:28 +0000 (0:00:05.009) 0:03:42.734 ********
2026-04-18 00:31:34.116421 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-18 00:31:34.116433 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-18 00:31:34.116444 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:31:34.116455 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:31:34.116466 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-18 00:31:34.116477 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-18 00:31:34.116488 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:31:34.116499 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-18 00:31:34.116509 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:31:34.116520 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-18 00:31:34.116531 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:31:34.116541 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:31:34.116552 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-18 00:31:34.116563 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:31:34.116574 | orchestrator |
2026-04-18 00:31:34.116584 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-18 00:31:34.116595 | orchestrator | Saturday 18 April 2026 00:31:28 +0000 (0:00:00.321) 0:03:43.056 ********
2026-04-18 00:31:34.116606 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-18 00:31:34.116620 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-18 00:31:34.116639 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-18 00:31:34.116689 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-18 00:31:34.116710 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-18 00:31:34.116728 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-18 00:31:34.116746 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-18 00:31:34.116763 | orchestrator |
2026-04-18 00:31:34.116775 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-18 00:31:34.116786 | orchestrator | Saturday 18 April 2026 00:31:29 +0000 (0:00:01.122) 0:03:44.179 ********
2026-04-18 00:31:34.116799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:31:34.116813 | orchestrator |
2026-04-18 00:31:34.116824 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-18 00:31:34.116835 | orchestrator | Saturday 18 April 2026 00:31:30 +0000 (0:00:00.430) 0:03:44.609 ********
2026-04-18 00:31:34.116845 | orchestrator | ok: [testbed-manager]
2026-04-18 00:31:34.116856 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:31:34.116872 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:31:34.116889 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:31:34.116916 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:31:34.116935 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:31:34.116952 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:31:34.116970 | orchestrator |
2026-04-18 00:31:34.116989 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-18 00:31:34.117008 | orchestrator | Saturday 18 April 2026 00:31:31 +0000 (0:00:01.405) 0:03:46.015 ********
2026-04-18 00:31:34.117026 | orchestrator | ok: [testbed-manager]
2026-04-18 00:31:34.117041 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:31:34.117052 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:31:34.117063 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:31:34.117074 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:31:34.117084 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:31:34.117095 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:31:34.117105 | orchestrator |
2026-04-18 00:31:34.117116 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-18 00:31:34.117127 | orchestrator | Saturday 18 April 2026 00:31:32 +0000 (0:00:00.608) 0:03:46.623 ********
2026-04-18 00:31:34.117138 | orchestrator | changed: [testbed-manager]
2026-04-18 00:31:34.117149 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:31:34.117172 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:31:34.117184 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:31:34.117194 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:31:34.117205 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:31:34.117215 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:31:34.117226 | orchestrator |
2026-04-18 00:31:34.117237 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-18 00:31:34.117248 | orchestrator | Saturday 18 April 2026 00:31:32 +0000 (0:00:00.686) 0:03:47.310 ********
2026-04-18 00:31:34.117258 | orchestrator | ok: [testbed-manager]
2026-04-18 00:31:34.117268 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:31:34.117277 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:31:34.117287 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:31:34.117296 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:31:34.117306 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:31:34.117315 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:31:34.117325 | orchestrator |
2026-04-18 00:31:34.117334 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-18 00:31:34.117376 | orchestrator | Saturday 18 April 2026 00:31:33 +0000 (0:00:00.621) 0:03:47.931 ********
2026-04-18 00:31:34.117401 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776470738.8748608, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:34.117416 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776470776.5929646, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:34.117427 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776470738.4842765, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:34.117462 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776470723.5358598, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.000786 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776470737.2614334, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.000930 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776470728.4981704, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.000947 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776470737.044621, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.000959 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.000986 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.000998 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.001009 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.001049 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.001069 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.001081 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-18 00:31:40.001094 | orchestrator |
2026-04-18 00:31:40.001108 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-18 00:31:40.001122 | orchestrator | Saturday 18 April 2026 00:31:34 +0000 (0:00:01.086) 0:03:49.017 ********
2026-04-18 00:31:40.001142 | orchestrator | changed: [testbed-manager]
2026-04-18 00:31:40.001168 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:31:40.001195 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:31:40.001213 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:31:40.001233 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:31:40.001256 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:31:40.001282 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:31:40.001300 | orchestrator |
2026-04-18 00:31:40.001321 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-18 00:31:40.001379 | orchestrator | Saturday 18 April 2026 00:31:35 +0000 (0:00:01.268) 0:03:50.286 ********
2026-04-18 00:31:40.001397 | orchestrator | changed: [testbed-manager]
2026-04-18 00:31:40.001410 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:31:40.001423 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:31:40.001435 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:31:40.001448 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:31:40.001461 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:31:40.001474 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:31:40.001486 | orchestrator |
2026-04-18 00:31:40.001508 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-18 00:31:40.001521 | orchestrator | Saturday 18 April 2026 00:31:37 +0000 (0:00:01.247) 0:03:51.533 ********
2026-04-18 00:31:40.001532 | orchestrator | changed: [testbed-manager]
2026-04-18 00:31:40.001543 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:31:40.001554 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:31:40.001564 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:31:40.001575 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:31:40.001586 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:31:40.001596 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:31:40.001607 | orchestrator |
2026-04-18 00:31:40.001618 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-18 00:31:40.001629 | orchestrator | Saturday 18 April 2026 00:31:38 +0000 (0:00:01.308) 0:03:52.841 ********
2026-04-18 00:31:40.001639 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:31:40.001650 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:31:40.001661 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:31:40.001671 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:31:40.001682 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:31:40.001693 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:31:40.001713 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:31:40.001723 | orchestrator |
2026-04-18 00:31:40.001734 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-18 00:31:40.001745 | orchestrator | Saturday 18 April 2026 00:31:38 +0000 (0:00:00.288) 0:03:53.130 ********
2026-04-18 00:31:40.001756 | orchestrator | ok: [testbed-manager]
2026-04-18 00:31:40.001768 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:31:40.001779 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:31:40.001789 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:31:40.001800 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:31:40.001810 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:31:40.001821 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:31:40.001832 | orchestrator |
2026-04-18 00:31:40.001842 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-18 00:31:40.001854 | orchestrator | Saturday 18 April 2026 00:31:39 +0000 (0:00:00.797) 0:03:53.928 ********
2026-04-18 00:31:40.001867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:31:40.001880 | orchestrator |
2026-04-18 00:31:40.001891 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-18 00:31:40.001912 | orchestrator | Saturday 18 April 2026 00:31:39 +0000 (0:00:00.429) 0:03:54.357 ********
2026-04-18 00:33:02.870797 | orchestrator | ok: [testbed-manager]
2026-04-18 00:33:02.870911 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:33:02.870927 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:33:02.870938 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:33:02.870949 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:33:02.870960 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:33:02.870971 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:33:02.870982 | orchestrator |
2026-04-18 00:33:02.870994 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-18 00:33:02.871006 | orchestrator | Saturday 18 April 2026 00:31:49 +0000 (0:00:09.364) 0:04:03.722 ********
2026-04-18 00:33:02.871017 | orchestrator | ok: [testbed-manager]
2026-04-18 00:33:02.871028 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:33:02.871039 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:33:02.871050 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:33:02.871061 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:33:02.871071 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:33:02.871082 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:33:02.871093 | orchestrator |
2026-04-18 00:33:02.871104 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-18 00:33:02.871115 | orchestrator | Saturday 18 April 2026 00:31:50 +0000 (0:00:01.294) 0:04:05.016 ********
2026-04-18 00:33:02.871125 | orchestrator | ok: [testbed-manager]
2026-04-18 00:33:02.871136 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:33:02.871147 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:33:02.871158 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:33:02.871168 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:33:02.871179 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:33:02.871189 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:33:02.871201 | orchestrator |
2026-04-18 00:33:02.871212 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-18 00:33:02.871223 | orchestrator | Saturday 18 April 2026 00:31:51 +0000 (0:00:01.009) 0:04:06.026 ********
2026-04-18 00:33:02.871234 | orchestrator | ok: [testbed-manager]
2026-04-18 00:33:02.871245 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:33:02.871338 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:33:02.871352 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:33:02.871364 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:33:02.871377 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:33:02.871389 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:33:02.871401 | orchestrator |
2026-04-18 00:33:02.871446 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-18 00:33:02.871468 | orchestrator | Saturday 18 April 2026 00:31:51 +0000 (0:00:00.279) 0:04:06.305 ********
2026-04-18 00:33:02.871485 | orchestrator | ok: [testbed-manager]
2026-04-18 00:33:02.871506 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:33:02.871526 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:33:02.871545 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:33:02.871560 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:33:02.871572 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:33:02.871584 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:33:02.871597 | orchestrator |
2026-04-18 00:33:02.871610 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-18 00:33:02.871622 | orchestrator | Saturday 18 April 2026 00:31:52 +0000 (0:00:00.291) 0:04:06.597 ********
2026-04-18 00:33:02.871635 | orchestrator | ok: [testbed-manager]
2026-04-18 00:33:02.871647 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:33:02.871659 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:33:02.871672 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:33:02.871684 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:33:02.871696 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:33:02.871707 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:33:02.871717 | orchestrator |
2026-04-18 00:33:02.871744 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-18 00:33:02.871755 | orchestrator | Saturday 18 April 2026 00:31:52 +0000 (0:00:00.271) 0:04:06.869 ********
2026-04-18 00:33:02.871766 | orchestrator | ok: [testbed-manager]
2026-04-18 00:33:02.871777 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:33:02.871787 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:33:02.871798 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:33:02.871808 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:33:02.871819 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:33:02.871829 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:33:02.871840 | orchestrator |
2026-04-18 00:33:02.871851 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-18 00:33:02.871862 | orchestrator | Saturday 18 April 2026 00:31:57 +0000 (0:00:05.294) 0:04:12.163 ********
2026-04-18 00:33:02.871875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:33:02.871888 | orchestrator |
2026-04-18 00:33:02.871899 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-18 00:33:02.871910 | orchestrator | Saturday 18 April 2026 00:31:58 +0000 (0:00:00.396) 0:04:12.560 ********
2026-04-18 00:33:02.871921 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-18 00:33:02.871932 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-18 00:33:02.871943 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-18 00:33:02.871954 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-18 00:33:02.871965 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:33:02.871976 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-18 00:33:02.871987 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-18 00:33:02.871997 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:33:02.872008 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-18 00:33:02.872019 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-18 00:33:02.872029 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:33:02.872040 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-18 00:33:02.872051 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-18 00:33:02.872061 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:33:02.872072 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-18 00:33:02.872093 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-18 00:33:02.872123 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:33:02.872134 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:33:02.872145 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-18 00:33:02.872155 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-18 00:33:02.872166 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:33:02.872177 | orchestrator |
2026-04-18 00:33:02.872187 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-18 00:33:02.872203 | orchestrator | Saturday 18 April 2026 00:31:58 +0000 (0:00:00.316) 0:04:12.876 ********
2026-04-18 00:33:02.872227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:33:02.872279 | orchestrator |
2026-04-18 00:33:02.872298 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-18 00:33:02.872315 | orchestrator | Saturday 18 April 2026 00:31:59 +0000 (0:00:00.500) 0:04:13.376 ********
2026-04-18 00:33:02.872332 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-18 00:33:02.872350 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:33:02.872367 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-18 00:33:02.872385 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-18 00:33:02.872405 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:33:02.872422 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:33:02.872440 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-18 00:33:02.872458 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-18 00:33:02.872475 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:33:02.872493 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-18 00:33:02.872512 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:33:02.872530 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:33:02.872548 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-18 00:33:02.872566 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:33:02.872584 | orchestrator |
2026-04-18 00:33:02.872601 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-18 00:33:02.872619 | orchestrator | Saturday 18 April 2026 00:31:59 +0000 (0:00:00.306) 0:04:13.683 ********
2026-04-18 00:33:02.872636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:33:02.872655 | orchestrator |
2026-04-18 00:33:02.872673 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-18 00:33:02.872690 | orchestrator | Saturday 18 April 2026 00:31:59 +0000 (0:00:00.419) 0:04:14.103 ********
2026-04-18 00:33:02.872707 | orchestrator | changed: [testbed-manager]
2026-04-18 00:33:02.872726 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:33:02.872755 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:33:02.872773 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:33:02.872791 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:33:02.872809 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:33:02.872827 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:33:02.872844 | orchestrator |
2026-04-18 00:33:02.872863 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-18 00:33:02.872881 | orchestrator | Saturday 18 April 2026 00:32:35 +0000 (0:00:35.333) 0:04:49.436 ********
2026-04-18 00:33:02.872899 | orchestrator | changed: [testbed-manager]
2026-04-18 00:33:02.872917 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:33:02.872935 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:33:02.872967 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:33:02.872985 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:33:02.873004 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:33:02.873020 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:33:02.873037 | orchestrator |
2026-04-18 00:33:02.873052 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-18 00:33:02.873069 | orchestrator | Saturday 18 April 2026 00:32:44 +0000 (0:00:09.151) 0:04:58.588 ********
2026-04-18 00:33:02.873087 | orchestrator | changed: [testbed-manager]
2026-04-18 00:33:02.873104 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:33:02.873121 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:33:02.873139 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:33:02.873158 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:33:02.873176 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:33:02.873194 | orchestrator | changed: [testbed-node-2]
2026-04-18
00:33:02.873212 | orchestrator | 2026-04-18 00:33:02.873231 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-18 00:33:02.873274 | orchestrator | Saturday 18 April 2026 00:32:53 +0000 (0:00:09.180) 0:05:07.768 ******** 2026-04-18 00:33:02.873293 | orchestrator | ok: [testbed-manager] 2026-04-18 00:33:02.873311 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:33:02.873331 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:33:02.873350 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:33:02.873368 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:33:02.873386 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:33:02.873403 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:33:02.873421 | orchestrator | 2026-04-18 00:33:02.873439 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-18 00:33:02.873458 | orchestrator | Saturday 18 April 2026 00:32:55 +0000 (0:00:02.135) 0:05:09.904 ******** 2026-04-18 00:33:02.873477 | orchestrator | changed: [testbed-manager] 2026-04-18 00:33:02.873496 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:33:02.873514 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:33:02.873531 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:33:02.873549 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:33:02.873568 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:33:02.873586 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:33:02.873605 | orchestrator | 2026-04-18 00:33:02.873642 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-18 00:33:13.734599 | orchestrator | Saturday 18 April 2026 00:33:02 +0000 (0:00:07.322) 0:05:17.226 ******** 2026-04-18 00:33:13.734716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:33:13.734734 | orchestrator | 2026-04-18 00:33:13.734747 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-18 00:33:13.734758 | orchestrator | Saturday 18 April 2026 00:33:03 +0000 (0:00:00.388) 0:05:17.615 ******** 2026-04-18 00:33:13.734770 | orchestrator | changed: [testbed-manager] 2026-04-18 00:33:13.734782 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:33:13.734793 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:33:13.734804 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:33:13.734815 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:33:13.734826 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:33:13.734837 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:33:13.734848 | orchestrator | 2026-04-18 00:33:13.734859 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-18 00:33:13.734871 | orchestrator | Saturday 18 April 2026 00:33:03 +0000 (0:00:00.724) 0:05:18.339 ******** 2026-04-18 00:33:13.734882 | orchestrator | ok: [testbed-manager] 2026-04-18 00:33:13.734894 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:33:13.734905 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:33:13.734916 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:33:13.734953 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:33:13.734965 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:33:13.734976 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:33:13.734987 | orchestrator | 2026-04-18 00:33:13.734999 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-18 00:33:13.735010 | orchestrator | Saturday 18 April 2026 00:33:05 +0000 (0:00:01.848) 0:05:20.187 ******** 2026-04-18 00:33:13.735021 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:33:13.735032 | orchestrator | 
changed: [testbed-node-1] 2026-04-18 00:33:13.735043 | orchestrator | changed: [testbed-manager] 2026-04-18 00:33:13.735054 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:33:13.735064 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:33:13.735075 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:33:13.735086 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:33:13.735097 | orchestrator | 2026-04-18 00:33:13.735108 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-18 00:33:13.735119 | orchestrator | Saturday 18 April 2026 00:33:06 +0000 (0:00:00.763) 0:05:20.950 ******** 2026-04-18 00:33:13.735132 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:33:13.735144 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:33:13.735157 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:33:13.735169 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:33:13.735181 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:33:13.735194 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:33:13.735206 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:33:13.735219 | orchestrator | 2026-04-18 00:33:13.735231 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-18 00:33:13.735270 | orchestrator | Saturday 18 April 2026 00:33:06 +0000 (0:00:00.254) 0:05:21.205 ******** 2026-04-18 00:33:13.735283 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:33:13.735311 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:33:13.735324 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:33:13.735336 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:33:13.735348 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:33:13.735360 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:33:13.735373 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:33:13.735386 | orchestrator | 2026-04-18 
00:33:13.735398 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-18 00:33:13.735411 | orchestrator | Saturday 18 April 2026 00:33:07 +0000 (0:00:00.385) 0:05:21.590 ******** 2026-04-18 00:33:13.735423 | orchestrator | ok: [testbed-manager] 2026-04-18 00:33:13.735435 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:33:13.735447 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:33:13.735460 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:33:13.735473 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:33:13.735485 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:33:13.735496 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:33:13.735507 | orchestrator | 2026-04-18 00:33:13.735518 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-18 00:33:13.735529 | orchestrator | Saturday 18 April 2026 00:33:07 +0000 (0:00:00.362) 0:05:21.952 ******** 2026-04-18 00:33:13.735540 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:33:13.735551 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:33:13.735657 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:33:13.735669 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:33:13.735680 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:33:13.735690 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:33:13.735701 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:33:13.735712 | orchestrator | 2026-04-18 00:33:13.735723 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-18 00:33:13.735735 | orchestrator | Saturday 18 April 2026 00:33:07 +0000 (0:00:00.267) 0:05:22.220 ******** 2026-04-18 00:33:13.735746 | orchestrator | ok: [testbed-manager] 2026-04-18 00:33:13.735758 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:33:13.735780 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:33:13.735791 | orchestrator | 
ok: [testbed-node-2] 2026-04-18 00:33:13.735802 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:33:13.735812 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:33:13.735823 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:33:13.735834 | orchestrator | 2026-04-18 00:33:13.735845 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-18 00:33:13.735856 | orchestrator | Saturday 18 April 2026 00:33:08 +0000 (0:00:00.303) 0:05:22.524 ******** 2026-04-18 00:33:13.735867 | orchestrator | ok: [testbed-manager] =>  2026-04-18 00:33:13.735878 | orchestrator |  docker_version: 5:27.5.1 2026-04-18 00:33:13.735889 | orchestrator | ok: [testbed-node-0] =>  2026-04-18 00:33:13.735899 | orchestrator |  docker_version: 5:27.5.1 2026-04-18 00:33:13.735910 | orchestrator | ok: [testbed-node-1] =>  2026-04-18 00:33:13.735921 | orchestrator |  docker_version: 5:27.5.1 2026-04-18 00:33:13.735932 | orchestrator | ok: [testbed-node-2] =>  2026-04-18 00:33:13.735942 | orchestrator |  docker_version: 5:27.5.1 2026-04-18 00:33:13.735972 | orchestrator | ok: [testbed-node-3] =>  2026-04-18 00:33:13.735984 | orchestrator |  docker_version: 5:27.5.1 2026-04-18 00:33:13.735995 | orchestrator | ok: [testbed-node-4] =>  2026-04-18 00:33:13.736006 | orchestrator |  docker_version: 5:27.5.1 2026-04-18 00:33:13.736016 | orchestrator | ok: [testbed-node-5] =>  2026-04-18 00:33:13.736027 | orchestrator |  docker_version: 5:27.5.1 2026-04-18 00:33:13.736038 | orchestrator | 2026-04-18 00:33:13.736049 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-18 00:33:13.736060 | orchestrator | Saturday 18 April 2026 00:33:08 +0000 (0:00:00.232) 0:05:22.756 ******** 2026-04-18 00:33:13.736071 | orchestrator | ok: [testbed-manager] =>  2026-04-18 00:33:13.736082 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-18 00:33:13.736093 | orchestrator | ok: [testbed-node-0] =>  2026-04-18 
00:33:13.736103 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-18 00:33:13.736114 | orchestrator | ok: [testbed-node-1] =>  2026-04-18 00:33:13.736125 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-18 00:33:13.736136 | orchestrator | ok: [testbed-node-2] =>  2026-04-18 00:33:13.736147 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-18 00:33:13.736157 | orchestrator | ok: [testbed-node-3] =>  2026-04-18 00:33:13.736168 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-18 00:33:13.736179 | orchestrator | ok: [testbed-node-4] =>  2026-04-18 00:33:13.736190 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-18 00:33:13.736201 | orchestrator | ok: [testbed-node-5] =>  2026-04-18 00:33:13.736212 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-18 00:33:13.736223 | orchestrator | 2026-04-18 00:33:13.736251 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-18 00:33:13.736263 | orchestrator | Saturday 18 April 2026 00:33:08 +0000 (0:00:00.247) 0:05:23.004 ******** 2026-04-18 00:33:13.736294 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:33:13.736305 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:33:13.736316 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:33:13.736326 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:33:13.736337 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:33:13.736348 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:33:13.736359 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:33:13.736370 | orchestrator | 2026-04-18 00:33:13.736381 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-18 00:33:13.736406 | orchestrator | Saturday 18 April 2026 00:33:08 +0000 (0:00:00.239) 0:05:23.244 ******** 2026-04-18 00:33:13.736418 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:33:13.736429 | orchestrator | skipping: [testbed-node-0] 
2026-04-18 00:33:13.736439 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:33:13.736450 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:33:13.736461 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:33:13.736472 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:33:13.736483 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:33:13.736501 | orchestrator | 2026-04-18 00:33:13.736512 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-18 00:33:13.736523 | orchestrator | Saturday 18 April 2026 00:33:09 +0000 (0:00:00.244) 0:05:23.488 ******** 2026-04-18 00:33:13.736537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:33:13.736550 | orchestrator | 2026-04-18 00:33:13.736561 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-18 00:33:13.736572 | orchestrator | Saturday 18 April 2026 00:33:09 +0000 (0:00:00.412) 0:05:23.900 ******** 2026-04-18 00:33:13.736583 | orchestrator | ok: [testbed-manager] 2026-04-18 00:33:13.736594 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:33:13.736605 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:33:13.736616 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:33:13.736627 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:33:13.736638 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:33:13.736649 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:33:13.736660 | orchestrator | 2026-04-18 00:33:13.736671 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-18 00:33:13.736682 | orchestrator | Saturday 18 April 2026 00:33:10 +0000 (0:00:00.817) 0:05:24.718 ******** 2026-04-18 00:33:13.736723 | orchestrator 
| ok: [testbed-manager] 2026-04-18 00:33:13.736735 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:33:13.736746 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:33:13.736758 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:33:13.736769 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:33:13.736780 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:33:13.736791 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:33:13.736802 | orchestrator | 2026-04-18 00:33:13.736813 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-18 00:33:13.736826 | orchestrator | Saturday 18 April 2026 00:33:13 +0000 (0:00:03.038) 0:05:27.757 ******** 2026-04-18 00:33:13.736837 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-18 00:33:13.736848 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-18 00:33:13.736859 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-18 00:33:13.736870 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:33:13.736881 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-18 00:33:13.736892 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-18 00:33:13.736903 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-18 00:33:13.736914 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:33:13.736925 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-04-18 00:33:13.736936 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-18 00:33:13.736947 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-18 00:33:13.736958 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:33:13.736969 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-18 00:33:13.736980 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-18 00:33:13.736991 | 
orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-18 00:33:13.737002 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-18 00:33:13.737022 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-18 00:34:17.572443 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-18 00:34:17.572544 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:34:17.572559 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-18 00:34:17.572617 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-18 00:34:17.572629 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-18 00:34:17.572659 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:34:17.572669 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:34:17.572678 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-18 00:34:17.572688 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-18 00:34:17.572698 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-18 00:34:17.572707 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:34:17.572717 | orchestrator | 2026-04-18 00:34:17.572728 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-18 00:34:17.572740 | orchestrator | Saturday 18 April 2026 00:33:13 +0000 (0:00:00.544) 0:05:28.302 ******** 2026-04-18 00:34:17.572750 | orchestrator | ok: [testbed-manager] 2026-04-18 00:34:17.572761 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.572772 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:34:17.572782 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.572793 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.572803 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.572814 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:34:17.572840 | 
orchestrator | 2026-04-18 00:34:17.572863 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-18 00:34:17.572875 | orchestrator | Saturday 18 April 2026 00:33:21 +0000 (0:00:07.192) 0:05:35.495 ******** 2026-04-18 00:34:17.572885 | orchestrator | ok: [testbed-manager] 2026-04-18 00:34:17.572896 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.572907 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:34:17.572918 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:34:17.572928 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.572939 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.572949 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.572960 | orchestrator | 2026-04-18 00:34:17.572971 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-18 00:34:17.572983 | orchestrator | Saturday 18 April 2026 00:33:22 +0000 (0:00:01.043) 0:05:36.538 ******** 2026-04-18 00:34:17.572997 | orchestrator | ok: [testbed-manager] 2026-04-18 00:34:17.573010 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.573022 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:34:17.573034 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.573046 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.573058 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.573070 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:34:17.573082 | orchestrator | 2026-04-18 00:34:17.573094 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-18 00:34:17.573106 | orchestrator | Saturday 18 April 2026 00:33:31 +0000 (0:00:08.905) 0:05:45.443 ******** 2026-04-18 00:34:17.573119 | orchestrator | changed: [testbed-manager] 2026-04-18 00:34:17.573132 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.573144 | orchestrator | changed: 
[testbed-node-1] 2026-04-18 00:34:17.573175 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.573194 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:34:17.573206 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.573218 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.573230 | orchestrator | 2026-04-18 00:34:17.573243 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-18 00:34:17.573255 | orchestrator | Saturday 18 April 2026 00:33:34 +0000 (0:00:03.610) 0:05:49.054 ******** 2026-04-18 00:34:17.573267 | orchestrator | ok: [testbed-manager] 2026-04-18 00:34:17.573279 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.573291 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:34:17.573303 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:34:17.573315 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.573327 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.573339 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.573350 | orchestrator | 2026-04-18 00:34:17.573361 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-18 00:34:17.573380 | orchestrator | Saturday 18 April 2026 00:33:35 +0000 (0:00:01.295) 0:05:50.349 ******** 2026-04-18 00:34:17.573391 | orchestrator | ok: [testbed-manager] 2026-04-18 00:34:17.573402 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.573412 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:34:17.573423 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:34:17.573434 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.573445 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.573455 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.573466 | orchestrator | 2026-04-18 00:34:17.573477 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2026-04-18 00:34:17.573488 | orchestrator | Saturday 18 April 2026 00:33:37 +0000 (0:00:01.323) 0:05:51.673 ******** 2026-04-18 00:34:17.573498 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:34:17.573509 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:34:17.573520 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:34:17.573531 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:34:17.573541 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:34:17.573552 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:34:17.573562 | orchestrator | changed: [testbed-manager] 2026-04-18 00:34:17.573573 | orchestrator | 2026-04-18 00:34:17.573584 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-18 00:34:17.573595 | orchestrator | Saturday 18 April 2026 00:33:37 +0000 (0:00:00.620) 0:05:52.293 ******** 2026-04-18 00:34:17.573605 | orchestrator | ok: [testbed-manager] 2026-04-18 00:34:17.573616 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:34:17.573627 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.573638 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.573648 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.573659 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.573670 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:34:17.573680 | orchestrator | 2026-04-18 00:34:17.573691 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-18 00:34:17.573719 | orchestrator | Saturday 18 April 2026 00:33:48 +0000 (0:00:10.472) 0:06:02.766 ******** 2026-04-18 00:34:17.573731 | orchestrator | changed: [testbed-manager] 2026-04-18 00:34:17.573742 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.573752 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:34:17.573763 | orchestrator | changed: [testbed-node-2] 2026-04-18 
00:34:17.573774 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.573784 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.573795 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.573806 | orchestrator | 2026-04-18 00:34:17.573817 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-18 00:34:17.573827 | orchestrator | Saturday 18 April 2026 00:33:49 +0000 (0:00:01.192) 0:06:03.959 ******** 2026-04-18 00:34:17.573838 | orchestrator | ok: [testbed-manager] 2026-04-18 00:34:17.573849 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.573860 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:34:17.573870 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.573881 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:34:17.573892 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.573903 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.573913 | orchestrator | 2026-04-18 00:34:17.573924 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-18 00:34:17.573935 | orchestrator | Saturday 18 April 2026 00:33:59 +0000 (0:00:09.858) 0:06:13.817 ******** 2026-04-18 00:34:17.573946 | orchestrator | ok: [testbed-manager] 2026-04-18 00:34:17.573957 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:34:17.573967 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:34:17.573978 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:34:17.573989 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:34:17.573999 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:34:17.574075 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:34:17.574089 | orchestrator | 2026-04-18 00:34:17.574100 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-18 00:34:17.574111 | orchestrator | Saturday 18 April 2026 00:34:11 +0000 
(0:00:11.756) 0:06:25.573 ********
2026-04-18 00:34:17.574122 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-18 00:34:17.574133 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-18 00:34:17.574143 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-18 00:34:17.574154 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-18 00:34:17.574181 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-18 00:34:17.574226 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-18 00:34:17.574237 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-18 00:34:17.574248 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-18 00:34:17.574259 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-18 00:34:17.574270 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-18 00:34:17.574281 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-18 00:34:17.574291 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-18 00:34:17.574302 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-18 00:34:17.574313 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-18 00:34:17.574324 | orchestrator |
2026-04-18 00:34:17.574335 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-18 00:34:17.574347 | orchestrator | Saturday 18 April 2026 00:34:12 +0000 (0:00:01.134) 0:06:26.708 ********
2026-04-18 00:34:17.574358 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:34:17.574369 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:34:17.574379 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:34:17.574390 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:34:17.574401 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:34:17.574412 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:34:17.574423 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:34:17.574433 | orchestrator |
2026-04-18 00:34:17.574444 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-18 00:34:17.574455 | orchestrator | Saturday 18 April 2026 00:34:12 +0000 (0:00:00.477) 0:06:27.185 ********
2026-04-18 00:34:17.574466 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:17.574477 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:34:17.574488 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:34:17.574499 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:34:17.574509 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:34:17.574520 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:34:17.574531 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:34:17.574542 | orchestrator |
2026-04-18 00:34:17.574553 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-18 00:34:17.574565 | orchestrator | Saturday 18 April 2026 00:34:16 +0000 (0:00:03.990) 0:06:31.175 ********
2026-04-18 00:34:17.574576 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:34:17.574587 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:34:17.574597 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:34:17.574608 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:34:17.574619 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:34:17.574629 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:34:17.574640 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:34:17.574651 | orchestrator |
2026-04-18 00:34:17.574662 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-18 00:34:17.574674 | orchestrator | Saturday 18 April 2026 00:34:17 +0000 (0:00:00.497) 0:06:31.672 ********
2026-04-18 00:34:17.574685 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-18 00:34:17.574696 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-18 00:34:17.574714 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:34:17.574725 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-18 00:34:17.574736 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-18 00:34:17.574747 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:34:17.574758 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-18 00:34:17.574769 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-18 00:34:17.574780 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:34:17.574800 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-18 00:34:35.905905 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-18 00:34:35.906076 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:34:35.906095 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-18 00:34:35.906108 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-18 00:34:35.906119 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:34:35.906131 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-18 00:34:35.906182 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-18 00:34:35.906194 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:34:35.906205 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-18 00:34:35.906216 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-18 00:34:35.906227 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:34:35.906239 | orchestrator |
2026-04-18 00:34:35.906251 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-18 00:34:35.906263 | orchestrator | Saturday 18 April 2026 00:34:17 +0000 (0:00:00.534) 0:06:32.207 ********
2026-04-18 00:34:35.906274 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:34:35.906285 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:34:35.906296 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:34:35.906307 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:34:35.906318 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:34:35.906328 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:34:35.906339 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:34:35.906350 | orchestrator |
2026-04-18 00:34:35.906361 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-18 00:34:35.906373 | orchestrator | Saturday 18 April 2026 00:34:18 +0000 (0:00:00.478) 0:06:32.686 ********
2026-04-18 00:34:35.906384 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:34:35.906395 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:34:35.906405 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:34:35.906416 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:34:35.906427 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:34:35.906439 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:34:35.906451 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:34:35.906463 | orchestrator |
2026-04-18 00:34:35.906476 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-18 00:34:35.906488 | orchestrator | Saturday 18 April 2026 00:34:18 +0000 (0:00:00.531) 0:06:33.217 ********
2026-04-18 00:34:35.906500 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:34:35.906513 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:34:35.906525 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:34:35.906537 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:34:35.906549 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:34:35.906561 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:34:35.906573 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:34:35.906585 | orchestrator |
2026-04-18 00:34:35.906598 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-18 00:34:35.906611 | orchestrator | Saturday 18 April 2026 00:34:19 +0000 (0:00:00.465) 0:06:33.683 ********
2026-04-18 00:34:35.906658 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:35.906672 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:34:35.906699 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:34:35.906712 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:34:35.906725 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:34:35.906737 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:34:35.906749 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:34:35.906761 | orchestrator |
2026-04-18 00:34:35.906774 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-18 00:34:35.906786 | orchestrator | Saturday 18 April 2026 00:34:21 +0000 (0:00:02.057) 0:06:35.741 ********
2026-04-18 00:34:35.906799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:34:35.906814 | orchestrator |
2026-04-18 00:34:35.906827 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-18 00:34:35.906840 | orchestrator | Saturday 18 April 2026 00:34:22 +0000 (0:00:00.783) 0:06:36.524 ********
2026-04-18 00:34:35.906852 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:35.906863 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:34:35.906874 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:34:35.906884 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:34:35.906895 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:34:35.906906 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:34:35.906916 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:34:35.906927 | orchestrator |
2026-04-18 00:34:35.906938 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-18 00:34:35.906948 | orchestrator | Saturday 18 April 2026 00:34:23 +0000 (0:00:01.022) 0:06:37.547 ********
2026-04-18 00:34:35.906959 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:35.906970 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:34:35.906981 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:34:35.906991 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:34:35.907002 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:34:35.907012 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:34:35.907023 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:34:35.907034 | orchestrator |
2026-04-18 00:34:35.907045 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-18 00:34:35.907056 | orchestrator | Saturday 18 April 2026 00:34:23 +0000 (0:00:00.742) 0:06:38.290 ********
2026-04-18 00:34:35.907066 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:35.907077 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:34:35.907088 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:34:35.907098 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:34:35.907109 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:34:35.907120 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:34:35.907131 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:34:35.907162 | orchestrator |
2026-04-18 00:34:35.907173 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-18 00:34:35.907204 | orchestrator | Saturday 18 April 2026 00:34:25 +0000 (0:00:01.355) 0:06:39.645 ********
2026-04-18 00:34:35.907215 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:34:35.907226 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:34:35.907237 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:34:35.907248 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:34:35.907258 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:34:35.907269 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:34:35.907280 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:34:35.907290 | orchestrator |
2026-04-18 00:34:35.907301 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-18 00:34:35.907312 | orchestrator | Saturday 18 April 2026 00:34:26 +0000 (0:00:01.334) 0:06:40.980 ********
2026-04-18 00:34:35.907323 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:35.907334 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:34:35.907354 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:34:35.907365 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:34:35.907376 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:34:35.907387 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:34:35.907397 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:34:35.907408 | orchestrator |
2026-04-18 00:34:35.907419 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-18 00:34:35.907430 | orchestrator | Saturday 18 April 2026 00:34:27 +0000 (0:00:01.311) 0:06:42.292 ********
2026-04-18 00:34:35.907441 | orchestrator | changed: [testbed-manager]
2026-04-18 00:34:35.907451 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:34:35.907462 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:34:35.907472 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:34:35.907483 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:34:35.907494 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:34:35.907504 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:34:35.907515 | orchestrator |
2026-04-18 00:34:35.907526 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-18 00:34:35.907537 | orchestrator | Saturday 18 April 2026 00:34:29 +0000 (0:00:01.646) 0:06:43.938 ********
2026-04-18 00:34:35.907548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:34:35.907560 | orchestrator |
2026-04-18 00:34:35.907570 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-18 00:34:35.907581 | orchestrator | Saturday 18 April 2026 00:34:30 +0000 (0:00:00.729) 0:06:44.668 ********
2026-04-18 00:34:35.907592 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:35.907603 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:34:35.907613 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:34:35.907624 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:34:35.907635 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:34:35.907645 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:34:35.907656 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:34:35.907667 | orchestrator |
2026-04-18 00:34:35.907677 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-18 00:34:35.907688 | orchestrator | Saturday 18 April 2026 00:34:31 +0000 (0:00:01.261) 0:06:45.929 ********
2026-04-18 00:34:35.907699 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:35.907710 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:34:35.907720 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:34:35.907737 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:34:35.907748 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:34:35.907759 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:34:35.907769 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:34:35.907780 | orchestrator |
2026-04-18 00:34:35.907791 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-18 00:34:35.907802 | orchestrator | Saturday 18 April 2026 00:34:32 +0000 (0:00:01.231) 0:06:47.161 ********
2026-04-18 00:34:35.907813 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:35.907823 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:34:35.907834 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:34:35.907845 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:34:35.907855 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:34:35.907866 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:34:35.907876 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:34:35.907887 | orchestrator |
2026-04-18 00:34:35.907898 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-18 00:34:35.907909 | orchestrator | Saturday 18 April 2026 00:34:33 +0000 (0:00:01.063) 0:06:48.224 ********
2026-04-18 00:34:35.907919 | orchestrator | ok: [testbed-manager]
2026-04-18 00:34:35.907930 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:34:35.907941 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:34:35.907951 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:34:35.907969 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:34:35.907980 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:34:35.907991 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:34:35.908001 | orchestrator |
2026-04-18 00:34:35.908012 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-18 00:34:35.908023 | orchestrator | Saturday 18 April 2026 00:34:34 +0000 (0:00:01.036) 0:06:49.261 ********
2026-04-18 00:34:35.908034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:34:35.908045 | orchestrator |
2026-04-18 00:34:35.908056 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-18 00:34:35.908066 | orchestrator | Saturday 18 April 2026 00:34:35 +0000 (0:00:00.037) 0:06:50.043 ********
2026-04-18 00:34:35.908077 | orchestrator |
2026-04-18 00:34:35.908088 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-18 00:34:35.908099 | orchestrator | Saturday 18 April 2026 00:34:35 +0000 (0:00:00.037) 0:06:50.080 ********
2026-04-18 00:34:35.908110 | orchestrator |
2026-04-18 00:34:35.908120 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-18 00:34:35.908131 | orchestrator | Saturday 18 April 2026 00:34:35 +0000 (0:00:00.133) 0:06:50.213 ********
2026-04-18 00:34:35.908160 | orchestrator |
2026-04-18 00:34:35.908172 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-18 00:34:35.908190 | orchestrator | Saturday 18 April 2026 00:34:35 +0000 (0:00:00.048) 0:06:50.262 ********
2026-04-18 00:35:00.805760 | orchestrator |
2026-04-18 00:35:00.805874 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-18 00:35:00.805892 | orchestrator | Saturday 18 April 2026 00:34:35 +0000 (0:00:00.037) 0:06:50.299 ********
2026-04-18 00:35:00.805904 | orchestrator |
2026-04-18 00:35:00.805916 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-18 00:35:00.805927 | orchestrator | Saturday 18 April 2026 00:34:35 +0000 (0:00:00.040) 0:06:50.339 ********
2026-04-18 00:35:00.805958 | orchestrator |
2026-04-18 00:35:00.805980 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-18 00:35:00.805992 | orchestrator | Saturday 18 April 2026 00:34:36 +0000 (0:00:00.036) 0:06:50.376 ********
2026-04-18 00:35:00.806003 | orchestrator |
2026-04-18 00:35:00.806015 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-18 00:35:00.806077 | orchestrator | Saturday 18 April 2026 00:34:36 +0000 (0:00:00.073) 0:06:50.450 ********
2026-04-18 00:35:00.806088 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:00.806100 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:00.806112 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:00.806173 | orchestrator |
2026-04-18 00:35:00.806185 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-18 00:35:00.806196 | orchestrator | Saturday 18 April 2026 00:34:37 +0000 (0:00:01.125) 0:06:51.575 ********
2026-04-18 00:35:00.806208 | orchestrator | changed: [testbed-manager]
2026-04-18 00:35:00.806220 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:35:00.806231 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:35:00.806242 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:35:00.806254 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:35:00.806265 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:35:00.806279 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:35:00.806292 | orchestrator |
2026-04-18 00:35:00.806305 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-18 00:35:00.806318 | orchestrator | Saturday 18 April 2026 00:34:38 +0000 (0:00:01.206) 0:06:52.781 ********
2026-04-18 00:35:00.806331 | orchestrator | changed: [testbed-manager]
2026-04-18 00:35:00.806344 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:35:00.806355 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:35:00.806367 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:35:00.806406 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:35:00.806417 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:35:00.806428 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:35:00.806439 | orchestrator |
2026-04-18 00:35:00.806450 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-18 00:35:00.806461 | orchestrator | Saturday 18 April 2026 00:34:39 +0000 (0:00:01.077) 0:06:53.859 ********
2026-04-18 00:35:00.806472 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:35:00.806482 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:35:00.806493 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:35:00.806504 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:35:00.806515 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:35:00.806527 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:35:00.806537 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:35:00.806548 | orchestrator |
2026-04-18 00:35:00.806559 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-18 00:35:00.806571 | orchestrator | Saturday 18 April 2026 00:34:41 +0000 (0:00:02.199) 0:06:56.058 ********
2026-04-18 00:35:00.806582 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:35:00.806592 | orchestrator |
2026-04-18 00:35:00.806604 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-18 00:35:00.806614 | orchestrator | Saturday 18 April 2026 00:34:41 +0000 (0:00:00.092) 0:06:56.151 ********
2026-04-18 00:35:00.806632 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:00.806657 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:35:00.806680 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:35:00.806697 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:35:00.806715 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:35:00.806733 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:35:00.806750 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:35:00.806768 | orchestrator |
2026-04-18 00:35:00.806787 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-18 00:35:00.806807 | orchestrator | Saturday 18 April 2026 00:34:42 +0000 (0:00:01.037) 0:06:57.188 ********
2026-04-18 00:35:00.806824 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:35:00.806843 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:35:00.806861 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:35:00.806879 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:35:00.806897 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:35:00.806916 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:35:00.806935 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:35:00.806954 | orchestrator |
2026-04-18 00:35:00.806972 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-18 00:35:00.806991 | orchestrator | Saturday 18 April 2026 00:34:43 +0000 (0:00:00.445) 0:06:57.634 ********
2026-04-18 00:35:00.807011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:35:00.807031 | orchestrator |
2026-04-18 00:35:00.807049 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-18 00:35:00.807069 | orchestrator | Saturday 18 April 2026 00:34:44 +0000 (0:00:00.846) 0:06:58.480 ********
2026-04-18 00:35:00.807089 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:00.807107 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:00.807154 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:00.807173 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:00.807192 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:35:00.807210 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:35:00.807225 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:35:00.807235 | orchestrator |
2026-04-18 00:35:00.807247 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-18 00:35:00.807258 | orchestrator | Saturday 18 April 2026 00:34:44 +0000 (0:00:00.869) 0:06:59.350 ********
2026-04-18 00:35:00.807283 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-18 00:35:00.807316 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-18 00:35:00.807328 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-18 00:35:00.807339 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-18 00:35:00.807351 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-18 00:35:00.807367 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-18 00:35:00.807385 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-18 00:35:00.807427 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-18 00:35:00.807454 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-18 00:35:00.807472 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-18 00:35:00.807488 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-18 00:35:00.807507 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-18 00:35:00.807525 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-18 00:35:00.807545 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-18 00:35:00.807564 | orchestrator |
2026-04-18 00:35:00.807583 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-18 00:35:00.807596 | orchestrator | Saturday 18 April 2026 00:34:47 +0000 (0:00:02.536) 0:07:01.886 ********
2026-04-18 00:35:00.807607 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:35:00.807618 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:35:00.807629 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:35:00.807639 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:35:00.807650 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:35:00.807661 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:35:00.807672 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:35:00.807682 | orchestrator |
2026-04-18 00:35:00.807693 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-18 00:35:00.807704 | orchestrator | Saturday 18 April 2026 00:34:47 +0000 (0:00:00.434) 0:07:02.321 ********
2026-04-18 00:35:00.807717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:35:00.807730 | orchestrator |
2026-04-18 00:35:00.807741 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-18 00:35:00.807752 | orchestrator | Saturday 18 April 2026 00:34:48 +0000 (0:00:00.797) 0:07:03.119 ********
2026-04-18 00:35:00.807762 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:00.807773 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:00.807784 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:00.807794 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:00.807805 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:35:00.807816 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:35:00.807826 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:35:00.807837 | orchestrator |
2026-04-18 00:35:00.807848 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-18 00:35:00.807864 | orchestrator | Saturday 18 April 2026 00:34:49 +0000 (0:00:00.755) 0:07:03.875 ********
2026-04-18 00:35:00.807876 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:00.807887 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:00.807897 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:00.807908 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:00.807918 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:35:00.807929 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:35:00.807939 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:35:00.807950 | orchestrator |
2026-04-18 00:35:00.807961 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-18 00:35:00.807972 | orchestrator | Saturday 18 April 2026 00:34:50 +0000 (0:00:00.794) 0:07:04.669 ********
2026-04-18 00:35:00.807994 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:35:00.808005 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:35:00.808016 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:35:00.808026 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:35:00.808037 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:35:00.808048 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:35:00.808058 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:35:00.808069 | orchestrator |
2026-04-18 00:35:00.808080 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-18 00:35:00.808091 | orchestrator | Saturday 18 April 2026 00:34:50 +0000 (0:00:00.407) 0:07:05.077 ********
2026-04-18 00:35:00.808102 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:00.808112 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:00.808155 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:00.808166 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:00.808177 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:35:00.808188 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:35:00.808198 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:35:00.808209 | orchestrator |
2026-04-18 00:35:00.808220 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-18 00:35:00.808231 | orchestrator | Saturday 18 April 2026 00:34:52 +0000 (0:00:01.359) 0:07:06.436 ********
2026-04-18 00:35:00.808242 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:35:00.808252 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:35:00.808263 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:35:00.808274 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:35:00.808285 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:35:00.808296 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:35:00.808306 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:35:00.808317 | orchestrator |
2026-04-18 00:35:00.808328 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-18 00:35:00.808341 | orchestrator | Saturday 18 April 2026 00:34:52 +0000 (0:00:00.513) 0:07:06.950 ********
2026-04-18 00:35:00.808360 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:00.808378 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:35:00.808395 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:35:00.808413 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:35:00.808430 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:35:00.808449 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:35:00.808483 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:35:33.258564 | orchestrator |
2026-04-18 00:35:33.258680 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-18 00:35:33.258697 | orchestrator | Saturday 18 April 2026 00:35:00 +0000 (0:00:08.269) 0:07:15.220 ********
2026-04-18 00:35:33.258709 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:33.258721 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:35:33.258733 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:35:33.258744 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:35:33.258755 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:35:33.258765 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:35:33.258776 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:35:33.258787 | orchestrator |
2026-04-18 00:35:33.258799 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-18 00:35:33.258810 | orchestrator | Saturday 18 April 2026 00:35:02 +0000 (0:00:01.779) 0:07:16.578 ********
2026-04-18 00:35:33.258821 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:33.258831 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:35:33.258842 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:35:33.258853 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:35:33.258863 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:35:33.258874 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:35:33.258885 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:35:33.258919 | orchestrator |
2026-04-18 00:35:33.258931 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-18 00:35:33.258942 | orchestrator | Saturday 18 April 2026 00:35:03 +0000 (0:00:01.779) 0:07:18.357 ********
2026-04-18 00:35:33.258952 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:33.258963 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:35:33.258974 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:35:33.258985 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:35:33.258995 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:35:33.259006 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:35:33.259016 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:35:33.259027 | orchestrator |
2026-04-18 00:35:33.259038 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-18 00:35:33.259049 | orchestrator | Saturday 18 April 2026 00:35:05 +0000 (0:00:01.829) 0:07:20.187 ********
2026-04-18 00:35:33.259060 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:33.259072 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:33.259144 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:33.259157 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:33.259169 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:35:33.259182 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:35:33.259194 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:35:33.259206 | orchestrator |
2026-04-18 00:35:33.259219 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-18 00:35:33.259232 | orchestrator | Saturday 18 April 2026 00:35:06 +0000 (0:00:00.873) 0:07:21.060 ********
2026-04-18 00:35:33.259245 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:35:33.259256 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:35:33.259267 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:35:33.259278 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:35:33.259289 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:35:33.259299 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:35:33.259310 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:35:33.259320 | orchestrator |
2026-04-18 00:35:33.259345 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-18 00:35:33.259357 | orchestrator | Saturday 18 April 2026 00:35:07 +0000 (0:00:00.804) 0:07:21.865 ********
2026-04-18 00:35:33.259367 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:35:33.259378 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:35:33.259389 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:35:33.259399 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:35:33.259410 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:35:33.259420 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:35:33.259431 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:35:33.259441 | orchestrator |
2026-04-18 00:35:33.259452 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-18 00:35:33.259463 | orchestrator | Saturday 18 April 2026 00:35:08 +0000 (0:00:00.683) 0:07:22.549 ********
2026-04-18 00:35:33.259473 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:33.259484 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:33.259495 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:33.259505 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:33.259516 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:35:33.259527 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:35:33.259538 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:35:33.259549 | orchestrator |
2026-04-18 00:35:33.259560 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-18 00:35:33.259570 | orchestrator | Saturday 18 April 2026 00:35:08 +0000 (0:00:00.540) 0:07:23.089 ********
2026-04-18 00:35:33.259581 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:33.259592 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:33.259602 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:33.259613 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:33.259623 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:35:33.259634 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:35:33.259652 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:35:33.259663 | orchestrator |
2026-04-18 00:35:33.259674 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-18 00:35:33.259685 | orchestrator | Saturday 18 April 2026 00:35:09 +0000 (0:00:00.602) 0:07:23.691 ********
2026-04-18 00:35:33.259696 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:33.259706 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:33.259717 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:33.259727 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:33.259737 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:35:33.259748 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:35:33.259758 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:35:33.259769 | orchestrator |
2026-04-18 00:35:33.259780 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-18 00:35:33.259790 | orchestrator | Saturday 18 April 2026 00:35:09 +0000 (0:00:00.553) 0:07:24.244 ********
2026-04-18 00:35:33.259801 | orchestrator | ok: [testbed-manager]
2026-04-18 00:35:33.259812 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:35:33.259822 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:35:33.259832 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:35:33.259843 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:35:33.259853 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:35:33.259864 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:35:33.259874 | orchestrator |
2026-04-18 00:35:33.259903 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-18 00:35:33.259915 | orchestrator | Saturday 18 April 2026 00:35:15 +0000 (0:00:05.702) 0:07:29.947 ********
2026-04-18 00:35:33.259926 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:35:33.259937 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:35:33.259947 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:35:33.259958
| orchestrator | skipping: [testbed-node-2] 2026-04-18 00:35:33.259969 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:35:33.259979 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:35:33.259990 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:35:33.260000 | orchestrator | 2026-04-18 00:35:33.260011 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-04-18 00:35:33.260022 | orchestrator | Saturday 18 April 2026 00:35:16 +0000 (0:00:00.662) 0:07:30.610 ******** 2026-04-18 00:35:33.260035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:35:33.260049 | orchestrator | 2026-04-18 00:35:33.260060 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-04-18 00:35:33.260070 | orchestrator | Saturday 18 April 2026 00:35:17 +0000 (0:00:00.776) 0:07:31.386 ******** 2026-04-18 00:35:33.260102 | orchestrator | ok: [testbed-manager] 2026-04-18 00:35:33.260113 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:35:33.260124 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:35:33.260135 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:35:33.260145 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:35:33.260156 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:35:33.260167 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:35:33.260177 | orchestrator | 2026-04-18 00:35:33.260188 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-04-18 00:35:33.260199 | orchestrator | Saturday 18 April 2026 00:35:18 +0000 (0:00:01.923) 0:07:33.310 ******** 2026-04-18 00:35:33.260210 | orchestrator | ok: [testbed-manager] 2026-04-18 00:35:33.260220 | orchestrator | ok: [testbed-node-0] 2026-04-18 
00:35:33.260231 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:35:33.260241 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:35:33.260252 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:35:33.260262 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:35:33.260273 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:35:33.260283 | orchestrator | 2026-04-18 00:35:33.260294 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-04-18 00:35:33.260313 | orchestrator | Saturday 18 April 2026 00:35:20 +0000 (0:00:01.220) 0:07:34.530 ******** 2026-04-18 00:35:33.260324 | orchestrator | ok: [testbed-manager] 2026-04-18 00:35:33.260334 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:35:33.260345 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:35:33.260356 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:35:33.260366 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:35:33.260377 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:35:33.260387 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:35:33.260398 | orchestrator | 2026-04-18 00:35:33.260409 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-04-18 00:35:33.260419 | orchestrator | Saturday 18 April 2026 00:35:21 +0000 (0:00:00.878) 0:07:35.409 ******** 2026-04-18 00:35:33.260436 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-18 00:35:33.260448 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-18 00:35:33.260458 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-18 00:35:33.260470 | orchestrator | changed: [testbed-node-2] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-18 00:35:33.260480 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-18 00:35:33.260491 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-18 00:35:33.260502 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-18 00:35:33.260513 | orchestrator | 2026-04-18 00:35:33.260524 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-04-18 00:35:33.260534 | orchestrator | Saturday 18 April 2026 00:35:22 +0000 (0:00:01.774) 0:07:37.184 ******** 2026-04-18 00:35:33.260546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:35:33.260557 | orchestrator | 2026-04-18 00:35:33.260568 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-04-18 00:35:33.260579 | orchestrator | Saturday 18 April 2026 00:35:23 +0000 (0:00:00.928) 0:07:38.113 ******** 2026-04-18 00:35:33.260590 | orchestrator | changed: [testbed-manager] 2026-04-18 00:35:33.260600 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:35:33.260611 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:35:33.260622 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:35:33.260633 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:35:33.260643 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:35:33.260654 | orchestrator | changed: 
[testbed-node-5] 2026-04-18 00:35:33.260665 | orchestrator | 2026-04-18 00:35:33.260682 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-18 00:36:03.967892 | orchestrator | Saturday 18 April 2026 00:35:33 +0000 (0:00:09.502) 0:07:47.616 ******** 2026-04-18 00:36:03.968013 | orchestrator | ok: [testbed-manager] 2026-04-18 00:36:03.968032 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:36:03.968076 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:36:03.968089 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:36:03.968103 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:36:03.968116 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:36:03.968129 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:36:03.968142 | orchestrator | 2026-04-18 00:36:03.968155 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-18 00:36:03.968193 | orchestrator | Saturday 18 April 2026 00:35:34 +0000 (0:00:01.715) 0:07:49.332 ******** 2026-04-18 00:36:03.968205 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:36:03.968217 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:36:03.968230 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:36:03.968241 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:36:03.968254 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:36:03.968267 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:36:03.968278 | orchestrator | 2026-04-18 00:36:03.968290 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-18 00:36:03.968303 | orchestrator | Saturday 18 April 2026 00:35:36 +0000 (0:00:01.521) 0:07:50.853 ******** 2026-04-18 00:36:03.968315 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:03.968329 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:03.968341 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:03.968353 | orchestrator | changed: 
[testbed-node-2] 2026-04-18 00:36:03.968366 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:03.968379 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:03.968390 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:03.968402 | orchestrator | 2026-04-18 00:36:03.968415 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-18 00:36:03.968430 | orchestrator | 2026-04-18 00:36:03.968443 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-18 00:36:03.968457 | orchestrator | Saturday 18 April 2026 00:35:37 +0000 (0:00:01.225) 0:07:52.079 ******** 2026-04-18 00:36:03.968469 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:36:03.968482 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:36:03.968495 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:36:03.968508 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:36:03.968520 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:36:03.968534 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:36:03.968547 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:36:03.968559 | orchestrator | 2026-04-18 00:36:03.968571 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-18 00:36:03.968585 | orchestrator | 2026-04-18 00:36:03.968597 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-04-18 00:36:03.968611 | orchestrator | Saturday 18 April 2026 00:35:38 +0000 (0:00:00.493) 0:07:52.572 ******** 2026-04-18 00:36:03.968622 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:03.968633 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:03.968646 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:03.968659 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:03.968672 | orchestrator | changed: [testbed-node-3] 2026-04-18 
00:36:03.968684 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:03.968697 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:03.968710 | orchestrator | 2026-04-18 00:36:03.968736 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-18 00:36:03.968748 | orchestrator | Saturday 18 April 2026 00:35:39 +0000 (0:00:01.298) 0:07:53.870 ******** 2026-04-18 00:36:03.968760 | orchestrator | ok: [testbed-manager] 2026-04-18 00:36:03.968773 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:36:03.968786 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:36:03.968799 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:36:03.968811 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:36:03.968823 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:36:03.968836 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:36:03.968849 | orchestrator | 2026-04-18 00:36:03.968863 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-18 00:36:03.968876 | orchestrator | Saturday 18 April 2026 00:35:41 +0000 (0:00:01.504) 0:07:55.375 ******** 2026-04-18 00:36:03.968889 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:36:03.968902 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:36:03.968915 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:36:03.968943 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:36:03.968957 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:36:03.968971 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:36:03.968985 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:36:03.968998 | orchestrator | 2026-04-18 00:36:03.969013 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-18 00:36:03.969027 | orchestrator | Saturday 18 April 2026 00:35:41 +0000 (0:00:00.407) 0:07:55.782 ******** 2026-04-18 00:36:03.969041 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:36:03.969083 | orchestrator | 2026-04-18 00:36:03.969097 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-18 00:36:03.969110 | orchestrator | Saturday 18 April 2026 00:35:42 +0000 (0:00:00.676) 0:07:56.459 ******** 2026-04-18 00:36:03.969125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:36:03.969141 | orchestrator | 2026-04-18 00:36:03.969155 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-18 00:36:03.969169 | orchestrator | Saturday 18 April 2026 00:35:42 +0000 (0:00:00.777) 0:07:57.237 ******** 2026-04-18 00:36:03.969182 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:03.969197 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:03.969211 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:03.969226 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:03.969240 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:03.969253 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:03.969267 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:03.969280 | orchestrator | 2026-04-18 00:36:03.969319 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-18 00:36:03.969331 | orchestrator | Saturday 18 April 2026 00:35:52 +0000 (0:00:09.656) 0:08:06.894 ******** 2026-04-18 00:36:03.969343 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:03.969355 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:03.969367 | orchestrator | changed: [testbed-node-1] 2026-04-18 
00:36:03.969379 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:03.969392 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:03.969404 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:03.969417 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:03.969429 | orchestrator | 2026-04-18 00:36:03.969442 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-18 00:36:03.969454 | orchestrator | Saturday 18 April 2026 00:35:53 +0000 (0:00:00.886) 0:08:07.780 ******** 2026-04-18 00:36:03.969466 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:03.969478 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:03.969489 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:03.969501 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:03.969514 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:03.969527 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:03.969539 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:03.969553 | orchestrator | 2026-04-18 00:36:03.969567 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-18 00:36:03.969578 | orchestrator | Saturday 18 April 2026 00:35:54 +0000 (0:00:01.409) 0:08:09.190 ******** 2026-04-18 00:36:03.969590 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:03.969602 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:03.969615 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:03.969627 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:03.969641 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:03.969654 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:03.969667 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:03.969695 | orchestrator | 2026-04-18 00:36:03.969709 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-04-18 00:36:03.969722 | orchestrator | Saturday 18 April 2026 00:35:56 +0000 (0:00:01.923) 0:08:11.113 ******** 2026-04-18 00:36:03.969735 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:03.969748 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:03.969761 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:03.969774 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:03.969787 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:03.969799 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:03.969812 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:03.969825 | orchestrator | 2026-04-18 00:36:03.969838 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-18 00:36:03.969850 | orchestrator | Saturday 18 April 2026 00:35:58 +0000 (0:00:01.319) 0:08:12.433 ******** 2026-04-18 00:36:03.969862 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:03.969874 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:03.969886 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:03.969898 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:03.969910 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:03.969922 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:03.969934 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:03.969946 | orchestrator | 2026-04-18 00:36:03.969967 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-18 00:36:03.969980 | orchestrator | 2026-04-18 00:36:03.969993 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-18 00:36:03.970006 | orchestrator | Saturday 18 April 2026 00:35:59 +0000 (0:00:01.224) 0:08:13.658 ******** 2026-04-18 00:36:03.970219 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2026-04-18 00:36:03.970241 | orchestrator | 2026-04-18 00:36:03.970253 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-18 00:36:03.970266 | orchestrator | Saturday 18 April 2026 00:36:00 +0000 (0:00:00.913) 0:08:14.571 ******** 2026-04-18 00:36:03.970279 | orchestrator | ok: [testbed-manager] 2026-04-18 00:36:03.970293 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:36:03.970306 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:36:03.970320 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:36:03.970334 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:36:03.970347 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:36:03.970360 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:36:03.970372 | orchestrator | 2026-04-18 00:36:03.970384 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-18 00:36:03.970396 | orchestrator | Saturday 18 April 2026 00:36:01 +0000 (0:00:00.824) 0:08:15.395 ******** 2026-04-18 00:36:03.970409 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:03.970421 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:03.970432 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:03.970442 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:03.970452 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:03.970463 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:03.970474 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:03.970485 | orchestrator | 2026-04-18 00:36:03.970495 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-18 00:36:03.970505 | orchestrator | Saturday 18 April 2026 00:36:02 +0000 (0:00:01.256) 0:08:16.652 ******** 2026-04-18 00:36:03.970516 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, 
testbed-node-4, testbed-node-5 2026-04-18 00:36:03.970527 | orchestrator | 2026-04-18 00:36:03.970538 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-18 00:36:03.970548 | orchestrator | Saturday 18 April 2026 00:36:03 +0000 (0:00:00.801) 0:08:17.453 ******** 2026-04-18 00:36:03.970573 | orchestrator | ok: [testbed-manager] 2026-04-18 00:36:03.970584 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:36:03.970594 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:36:03.970604 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:36:03.970616 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:36:03.970626 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:36:03.970637 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:36:03.970649 | orchestrator | 2026-04-18 00:36:03.970677 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-18 00:36:05.449420 | orchestrator | Saturday 18 April 2026 00:36:03 +0000 (0:00:00.871) 0:08:18.324 ******** 2026-04-18 00:36:05.449521 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:05.449539 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:05.449550 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:05.449592 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:05.449631 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:05.449649 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:05.449669 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:05.449683 | orchestrator | 2026-04-18 00:36:05.449695 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:36:05.449707 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-18 00:36:05.449720 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 
2026-04-18 00:36:05.449731 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-18 00:36:05.449742 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-18 00:36:05.449752 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-18 00:36:05.449763 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-18 00:36:05.449774 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-18 00:36:05.449784 | orchestrator | 2026-04-18 00:36:05.449795 | orchestrator | 2026-04-18 00:36:05.449806 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:36:05.449817 | orchestrator | Saturday 18 April 2026 00:36:05 +0000 (0:00:01.213) 0:08:19.538 ******** 2026-04-18 00:36:05.449828 | orchestrator | =============================================================================== 2026-04-18 00:36:05.449839 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.57s 2026-04-18 00:36:05.449850 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.62s 2026-04-18 00:36:05.449861 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.33s 2026-04-18 00:36:05.449871 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.30s 2026-04-18 00:36:05.449882 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.99s 2026-04-18 00:36:05.449894 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.90s 2026-04-18 00:36:05.449905 | orchestrator | osism.services.docker : Install docker package ------------------------- 
11.76s 2026-04-18 00:36:05.449915 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.47s 2026-04-18 00:36:05.449926 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.86s 2026-04-18 00:36:05.449937 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.66s 2026-04-18 00:36:05.449977 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.50s 2026-04-18 00:36:05.449991 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.37s 2026-04-18 00:36:05.450004 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 9.18s 2026-04-18 00:36:05.450111 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.15s 2026-04-18 00:36:05.450141 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.91s 2026-04-18 00:36:05.450154 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.27s 2026-04-18 00:36:05.450167 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.32s 2026-04-18 00:36:05.450197 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.19s 2026-04-18 00:36:05.450210 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.70s 2026-04-18 00:36:05.450223 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.29s 2026-04-18 00:36:05.609149 | orchestrator | + osism apply fail2ban 2026-04-18 00:36:17.191697 | orchestrator | 2026-04-18 00:36:17 | INFO  | Prepare task for execution of fail2ban. 2026-04-18 00:36:17.264957 | orchestrator | 2026-04-18 00:36:17 | INFO  | Task 618811b3-3df7-4197-b993-7914c26a5050 (fail2ban) was prepared for execution. 
2026-04-18 00:36:17.265090 | orchestrator | 2026-04-18 00:36:17 | INFO  | It takes a moment until task 618811b3-3df7-4197-b993-7914c26a5050 (fail2ban) has been started and output is visible here. 2026-04-18 00:36:39.338965 | orchestrator | 2026-04-18 00:36:39.339124 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-04-18 00:36:39.339148 | orchestrator | 2026-04-18 00:36:39.339164 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-04-18 00:36:39.339181 | orchestrator | Saturday 18 April 2026 00:36:20 +0000 (0:00:00.313) 0:00:00.313 ******** 2026-04-18 00:36:39.339199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:36:39.339217 | orchestrator | 2026-04-18 00:36:39.339226 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-04-18 00:36:39.339323 | orchestrator | Saturday 18 April 2026 00:36:21 +0000 (0:00:01.093) 0:00:01.407 ******** 2026-04-18 00:36:39.339334 | orchestrator | changed: [testbed-manager] 2026-04-18 00:36:39.339346 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:36:39.339355 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:36:39.339364 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:36:39.339372 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:36:39.339381 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:36:39.339391 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:36:39.339400 | orchestrator | 2026-04-18 00:36:39.339409 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-04-18 00:36:39.339418 | orchestrator | Saturday 18 April 2026 00:36:34 +0000 (0:00:12.772) 0:00:14.180 ******** 
2026-04-18 00:36:39.339426 | orchestrator | changed: [testbed-manager]
2026-04-18 00:36:39.339435 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:36:39.339444 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:36:39.339452 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:36:39.339461 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:36:39.339469 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:36:39.339480 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:36:39.339496 | orchestrator |
2026-04-18 00:36:39.339539 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-18 00:36:39.339557 | orchestrator | Saturday 18 April 2026 00:36:35 +0000 (0:00:01.587) 0:00:15.767 ********
2026-04-18 00:36:39.339571 | orchestrator | ok: [testbed-manager]
2026-04-18 00:36:39.339587 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:36:39.339633 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:36:39.339649 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:36:39.339664 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:36:39.339679 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:36:39.339691 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:36:39.339705 | orchestrator |
2026-04-18 00:36:39.339720 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-18 00:36:39.339733 | orchestrator | Saturday 18 April 2026 00:36:37 +0000 (0:00:01.339) 0:00:17.107 ********
2026-04-18 00:36:39.339747 | orchestrator | changed: [testbed-manager]
2026-04-18 00:36:39.339762 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:36:39.339776 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:36:39.339790 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:36:39.339805 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:36:39.339818 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:36:39.339831 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:36:39.339844 | orchestrator |
2026-04-18 00:36:39.339859 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:36:39.339872 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:36:39.339907 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:36:39.339922 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:36:39.339936 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:36:39.339950 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:36:39.339965 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:36:39.339980 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:36:39.339994 | orchestrator |
2026-04-18 00:36:39.340036 | orchestrator |
2026-04-18 00:36:39.340052 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:36:39.340067 | orchestrator | Saturday 18 April 2026 00:36:39 +0000 (0:00:01.705) 0:00:18.813 ********
2026-04-18 00:36:39.340081 | orchestrator | ===============================================================================
2026-04-18 00:36:39.340097 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.77s
2026-04-18 00:36:39.340110 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.71s
2026-04-18 00:36:39.340126 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.59s
2026-04-18 00:36:39.340153 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.34s
2026-04-18 00:36:39.340167 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.09s
2026-04-18 00:36:39.517348 | orchestrator | + osism apply network
2026-04-18 00:36:50.827735 | orchestrator | 2026-04-18 00:36:50 | INFO  | Prepare task for execution of network.
2026-04-18 00:36:50.984910 | orchestrator | 2026-04-18 00:36:50 | INFO  | Task dd9b1fb1-0fd3-441f-99f6-24232b04ba41 (network) was prepared for execution.
2026-04-18 00:36:50.985059 | orchestrator | 2026-04-18 00:36:50 | INFO  | It takes a moment until task dd9b1fb1-0fd3-441f-99f6-24232b04ba41 (network) has been started and output is visible here.
2026-04-18 00:37:18.069934 | orchestrator |
2026-04-18 00:37:18.070241 | orchestrator | PLAY [Apply role network] ******************************************************
2026-04-18 00:37:18.070277 | orchestrator |
2026-04-18 00:37:18.070330 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-04-18 00:37:18.070352 | orchestrator | Saturday 18 April 2026 00:36:54 +0000 (0:00:00.323) 0:00:00.323 ********
2026-04-18 00:37:18.070372 | orchestrator | ok: [testbed-manager]
2026-04-18 00:37:18.070393 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:37:18.070412 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:37:18.070431 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:37:18.070451 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:37:18.070471 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:37:18.070489 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:37:18.070508 | orchestrator |
2026-04-18 00:37:18.070529 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-04-18 00:37:18.070549 | orchestrator | Saturday 18 April 2026 00:36:54 +0000 (0:00:00.611) 0:00:00.934 ********
2026-04-18 00:37:18.070570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:37:18.070593 | orchestrator |
2026-04-18 00:37:18.070613 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-04-18 00:37:18.070650 | orchestrator | Saturday 18 April 2026 00:36:56 +0000 (0:00:01.182) 0:00:02.117 ********
2026-04-18 00:37:18.070672 | orchestrator | ok: [testbed-manager]
2026-04-18 00:37:18.070692 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:37:18.070710 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:37:18.070728 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:37:18.070747 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:37:18.070766 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:37:18.070784 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:37:18.070802 | orchestrator |
2026-04-18 00:37:18.070821 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-04-18 00:37:18.070839 | orchestrator | Saturday 18 April 2026 00:36:58 +0000 (0:00:02.757) 0:00:04.875 ********
2026-04-18 00:37:18.070857 | orchestrator | ok: [testbed-manager]
2026-04-18 00:37:18.070875 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:37:18.070893 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:37:18.070912 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:37:18.070932 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:37:18.070951 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:37:18.070996 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:37:18.071008 | orchestrator |
2026-04-18 00:37:18.071020 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-04-18 00:37:18.071031 | orchestrator | Saturday 18 April 2026 00:37:00 +0000 (0:00:01.662) 0:00:06.538 ********
2026-04-18 00:37:18.071042 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-04-18 00:37:18.071053 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-04-18 00:37:18.071064 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-04-18 00:37:18.071075 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-04-18 00:37:18.071086 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-04-18 00:37:18.071112 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-04-18 00:37:18.071124 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-04-18 00:37:18.071135 | orchestrator |
2026-04-18 00:37:18.071146 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-04-18 00:37:18.071162 | orchestrator | Saturday 18 April 2026 00:37:01 +0000 (0:00:01.145) 0:00:07.683 ********
2026-04-18 00:37:18.071180 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-18 00:37:18.071200 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-18 00:37:18.071218 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-18 00:37:18.071236 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-18 00:37:18.071254 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-18 00:37:18.071294 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-18 00:37:18.071314 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-18 00:37:18.071350 | orchestrator |
2026-04-18 00:37:18.071369 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-04-18 00:37:18.071388 | orchestrator | Saturday 18 April 2026 00:37:04 +0000 (0:00:03.123) 0:00:10.807 ********
2026-04-18 00:37:18.071407 | orchestrator | changed: [testbed-manager]
2026-04-18 00:37:18.071426 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:37:18.071444 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:37:18.071459 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:37:18.071470 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:37:18.071481 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:37:18.071492 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:37:18.071503 | orchestrator |
2026-04-18 00:37:18.071513 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-04-18 00:37:18.071524 | orchestrator | Saturday 18 April 2026 00:37:06 +0000 (0:00:01.490) 0:00:12.298 ********
2026-04-18 00:37:18.071535 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-18 00:37:18.071546 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-18 00:37:18.071557 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-18 00:37:18.071567 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-18 00:37:18.071578 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-18 00:37:18.071589 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-18 00:37:18.071600 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-18 00:37:18.071611 | orchestrator |
2026-04-18 00:37:18.071622 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-04-18 00:37:18.071636 | orchestrator | Saturday 18 April 2026 00:37:07 +0000 (0:00:01.604) 0:00:13.903 ********
2026-04-18 00:37:18.071655 | orchestrator | ok: [testbed-manager]
2026-04-18 00:37:18.071673 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:37:18.071690 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:37:18.071707 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:37:18.071724 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:37:18.071741 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:37:18.071758 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:37:18.071778 | orchestrator |
2026-04-18 00:37:18.071798 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-04-18 00:37:18.071845 | orchestrator | Saturday 18 April 2026 00:37:08 +0000 (0:00:00.870) 0:00:14.773 ********
2026-04-18 00:37:18.071865 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:37:18.071884 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:37:18.071896 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:37:18.071907 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:37:18.071917 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:37:18.071936 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:37:18.071954 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:37:18.071999 | orchestrator |
2026-04-18 00:37:18.072017 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-04-18 00:37:18.072036 | orchestrator | Saturday 18 April 2026 00:37:09 +0000 (0:00:00.754) 0:00:15.528 ********
2026-04-18 00:37:18.072053 | orchestrator | ok: [testbed-manager]
2026-04-18 00:37:18.072069 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:37:18.072086 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:37:18.072104 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:37:18.072122 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:37:18.072140 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:37:18.072155 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:37:18.072171 | orchestrator |
2026-04-18 00:37:18.072188 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-04-18 00:37:18.072205 | orchestrator | Saturday 18 April 2026 00:37:11 +0000 (0:00:02.241) 0:00:17.769 ********
2026-04-18 00:37:18.072222 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:37:18.072238 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:37:18.072255 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:37:18.072272 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:37:18.072304 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:37:18.072321 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:37:18.072339 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'})
2026-04-18 00:37:18.072358 | orchestrator |
2026-04-18 00:37:18.072376 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-04-18 00:37:18.072393 | orchestrator | Saturday 18 April 2026 00:37:12 +0000 (0:00:00.846) 0:00:18.616 ********
2026-04-18 00:37:18.072410 | orchestrator | ok: [testbed-manager]
2026-04-18 00:37:18.072427 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:37:18.072444 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:37:18.072461 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:37:18.072479 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:37:18.072496 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:37:18.072513 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:37:18.072532 | orchestrator |
2026-04-18 00:37:18.072550 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-04-18 00:37:18.072568 | orchestrator | Saturday 18 April 2026 00:37:14 +0000 (0:00:01.485) 0:00:20.102 ********
2026-04-18 00:37:18.072588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:37:18.072609 | orchestrator |
2026-04-18 00:37:18.072640 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-18 00:37:18.072659 | orchestrator | Saturday 18 April 2026 00:37:15 +0000 (0:00:01.193) 0:00:21.295 ********
2026-04-18 00:37:18.072678 | orchestrator | ok: [testbed-manager]
2026-04-18 00:37:18.072697 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:37:18.072715 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:37:18.072733 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:37:18.072751 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:37:18.072770 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:37:18.072789 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:37:18.072807 | orchestrator |
2026-04-18 00:37:18.072825 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-04-18 00:37:18.072845 | orchestrator | Saturday 18 April 2026 00:37:16 +0000 (0:00:01.105) 0:00:22.401 ********
2026-04-18 00:37:18.072863 | orchestrator | ok: [testbed-manager]
2026-04-18 00:37:18.072881 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:37:18.072899 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:37:18.072918 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:37:18.072937 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:37:18.072956 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:37:18.073011 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:37:18.073022 | orchestrator |
2026-04-18 00:37:18.073033 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-18 00:37:18.073044 | orchestrator | Saturday 18 April 2026 00:37:17 +0000 (0:00:00.757) 0:00:23.158 ********
2026-04-18 00:37:18.073055 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-04-18 00:37:18.073066 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-04-18 00:37:18.073077 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-04-18 00:37:18.073088 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-04-18 00:37:18.073098 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-18 00:37:18.073109 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-04-18 00:37:18.073120 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-18 00:37:18.073131 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-04-18 00:37:18.073142 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-18 00:37:18.073164 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-04-18 00:37:18.073175 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-18 00:37:18.073186 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-18 00:37:18.073197 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-18 00:37:18.073207 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-04-18 00:37:18.073218 | orchestrator |
2026-04-18 00:37:18.073245 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-04-18 00:37:32.820923 | orchestrator | Saturday 18 April 2026 00:37:18 +0000 (0:00:00.996) 0:00:24.154 ********
2026-04-18 00:37:32.821115 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:37:32.821144 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:37:32.821164 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:37:32.821182 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:37:32.821200 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:37:32.821218 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:37:32.821237 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:37:32.821258 | orchestrator |
2026-04-18 00:37:32.821279 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-04-18 00:37:32.821299 | orchestrator | Saturday 18 April 2026 00:37:18 +0000 (0:00:00.722) 0:00:24.877 ********
2026-04-18 00:37:32.821321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:37:32.821344 | orchestrator |
2026-04-18 00:37:32.821356 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-04-18 00:37:32.821368 | orchestrator | Saturday 18 April 2026 00:37:22 +0000 (0:00:03.979) 0:00:28.856 ********
2026-04-18 00:37:32.821381 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-18 00:37:32.821395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821447 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-18 00:37:32.821460 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-18 00:37:32.821543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-18 00:37:32.821556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-18 00:37:32.821588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-18 00:37:32.821602 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-18 00:37:32.821615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-18 00:37:32.821628 | orchestrator |
2026-04-18 00:37:32.821640 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-04-18 00:37:32.821653 | orchestrator | Saturday 18 April 2026 00:37:27 +0000 (0:00:05.023) 0:00:33.880 ********
2026-04-18 00:37:32.821666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821678 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}})
2026-04-18 00:37:32.821691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821704 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821735 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}})
2026-04-18 00:37:32.821756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}})
2026-04-18 00:37:32.821769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}})
2026-04-18 00:37:32.821781 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}})
2026-04-18 00:37:32.821820 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}})
2026-04-18 00:37:32.821838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}})
2026-04-18 00:37:32.821873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}})
2026-04-18 00:37:44.861755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}})
2026-04-18 00:37:44.861833 | orchestrator |
2026-04-18 00:37:44.861840 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-04-18 00:37:44.861847 | orchestrator | Saturday 18 April 2026 00:37:32 +0000 (0:00:05.208) 0:00:39.088 ********
2026-04-18 00:37:44.861854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:37:44.861858 | orchestrator |
2026-04-18 00:37:44.861863 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-04-18 00:37:44.861868 | orchestrator | Saturday 18 April 2026 00:37:34 +0000 (0:00:01.004) 0:00:40.093 ********
2026-04-18 00:37:44.861872 | orchestrator | ok: [testbed-manager]
2026-04-18 00:37:44.861878 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:37:44.861883 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:37:44.861887 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:37:44.861891 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:37:44.861896 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:37:44.861900 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:37:44.861904 | orchestrator |
2026-04-18 00:37:44.861909 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-04-18 00:37:44.861913 | orchestrator | Saturday 18 April 2026 00:37:35 +0000 (0:00:01.917) 0:00:42.010 ********
2026-04-18 00:37:44.861918 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-18 00:37:44.861923 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-18 00:37:44.861991 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-18 00:37:44.861999 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-18 00:37:44.862004 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-18 00:37:44.862008 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-18 00:37:44.862012 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-18 00:37:44.862058 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-18 00:37:44.862064 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:37:44.862069 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-18 00:37:44.862079 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-18 00:37:44.862084 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-18 00:37:44.862088 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-18 00:37:44.862092 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:37:44.862097 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-18 00:37:44.862101 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-18 00:37:44.862105 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-18 00:37:44.862110 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:37:44.862114 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-18 00:37:44.862118 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-18 00:37:44.862123 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-18 00:37:44.862127 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-18 00:37:44.862131 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-18 00:37:44.862135 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:37:44.862140 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-18 00:37:44.862144 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-18 00:37:44.862148 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-18 00:37:44.862153 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-18 00:37:44.862157 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:37:44.862161 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:37:44.862165 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-18 00:37:44.862170 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-18 00:37:44.862174 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-18 00:37:44.862178 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-18 00:37:44.862182 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:37:44.862187 | orchestrator |
2026-04-18 00:37:44.862191 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-04-18 00:37:44.862206 | orchestrator | Saturday 18 April 2026 00:37:36 +0000 (0:00:00.674) 0:00:42.685 ********
2026-04-18 00:37:44.862211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:37:44.862216 | orchestrator |
2026-04-18 00:37:44.862225 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-04-18 00:37:44.862229 | orchestrator | Saturday 18 April 2026 00:37:37 +0000 (0:00:01.088) 0:00:43.773 ********
2026-04-18 00:37:44.862234 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:37:44.862239 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:37:44.862255 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:37:44.862260 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:37:44.862265 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:37:44.862269 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:37:44.862274 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:37:44.862278 | orchestrator | 2026-04-18 00:37:44.862283 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-04-18 00:37:44.862287 | orchestrator | Saturday 18 April 2026 00:37:38 +0000 (0:00:00.629) 0:00:44.402 ******** 2026-04-18 00:37:44.862292 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:37:44.862296 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:37:44.862301 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:37:44.862305 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:37:44.862310 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:37:44.862314 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:37:44.862319 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:37:44.862323 | orchestrator | 2026-04-18 00:37:44.862328 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-18 00:37:44.862332 | orchestrator | Saturday 18 April 2026 00:37:38 +0000 (0:00:00.514) 0:00:44.917 ******** 2026-04-18 00:37:44.862337 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:37:44.862341 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:37:44.862346 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:37:44.862350 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:37:44.862354 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:37:44.862359 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:37:44.862363 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:37:44.862368 | orchestrator | 2026-04-18 00:37:44.862372 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-18 00:37:44.862377 | orchestrator | Saturday 18 April 2026 00:37:39 +0000 (0:00:00.608) 0:00:45.525 ******** 
2026-04-18 00:37:44.862381 | orchestrator | ok: [testbed-manager] 2026-04-18 00:37:44.862386 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:37:44.862390 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:37:44.862395 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:37:44.862399 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:37:44.862403 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:37:44.862410 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:37:44.862415 | orchestrator | 2026-04-18 00:37:44.862419 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-04-18 00:37:44.862424 | orchestrator | Saturday 18 April 2026 00:37:40 +0000 (0:00:01.421) 0:00:46.947 ******** 2026-04-18 00:37:44.862428 | orchestrator | ok: [testbed-manager] 2026-04-18 00:37:44.862433 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:37:44.862437 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:37:44.862442 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:37:44.862446 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:37:44.862451 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:37:44.862455 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:37:44.862460 | orchestrator | 2026-04-18 00:37:44.862464 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-18 00:37:44.862469 | orchestrator | Saturday 18 April 2026 00:37:41 +0000 (0:00:01.016) 0:00:47.964 ******** 2026-04-18 00:37:44.862473 | orchestrator | ok: [testbed-manager] 2026-04-18 00:37:44.862478 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:37:44.862484 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:37:44.862489 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:37:44.862494 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:37:44.862498 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:37:44.862506 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:37:44.862511 | orchestrator | 2026-04-18 
00:37:44.862515 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-18 00:37:44.862520 | orchestrator | Saturday 18 April 2026 00:37:43 +0000 (0:00:01.912) 0:00:49.877 ******** 2026-04-18 00:37:44.862524 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:37:44.862529 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:37:44.862533 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:37:44.862538 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:37:44.862542 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:37:44.862547 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:37:44.862551 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:37:44.862556 | orchestrator | 2026-04-18 00:37:44.862560 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-18 00:37:44.862565 | orchestrator | Saturday 18 April 2026 00:37:44 +0000 (0:00:00.524) 0:00:50.401 ******** 2026-04-18 00:37:44.862569 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:37:44.862573 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:37:44.862578 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:37:44.862582 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:37:44.862587 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:37:44.862591 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:37:44.862596 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:37:44.862600 | orchestrator | 2026-04-18 00:37:44.862605 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:37:44.862610 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-18 00:37:44.862615 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-18 00:37:44.862623 | orchestrator | 
testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-18 00:37:45.030599 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-18 00:37:45.030698 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-18 00:37:45.030713 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-18 00:37:45.030727 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-18 00:37:45.030739 | orchestrator | 2026-04-18 00:37:45.030751 | orchestrator | 2026-04-18 00:37:45.030762 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:37:45.030775 | orchestrator | Saturday 18 April 2026 00:37:44 +0000 (0:00:00.546) 0:00:50.947 ******** 2026-04-18 00:37:45.030786 | orchestrator | =============================================================================== 2026-04-18 00:37:45.030797 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.21s 2026-04-18 00:37:45.030807 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.02s 2026-04-18 00:37:45.030818 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.98s 2026-04-18 00:37:45.030829 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.12s 2026-04-18 00:37:45.030840 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.76s 2026-04-18 00:37:45.030851 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.24s 2026-04-18 00:37:45.030862 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.92s 2026-04-18 00:37:45.030900 | orchestrator | 
osism.commons.network : Remove network-extra-init script ---------------- 1.91s 2026-04-18 00:37:45.030912 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.66s 2026-04-18 00:37:45.030923 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.60s 2026-04-18 00:37:45.031026 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.49s 2026-04-18 00:37:45.031049 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.49s 2026-04-18 00:37:45.031084 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.42s 2026-04-18 00:37:45.031102 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.19s 2026-04-18 00:37:45.031120 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s 2026-04-18 00:37:45.031131 | orchestrator | osism.commons.network : Create required directories --------------------- 1.15s 2026-04-18 00:37:45.031144 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.11s 2026-04-18 00:37:45.031158 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.09s 2026-04-18 00:37:45.031170 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.02s 2026-04-18 00:37:45.031183 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.00s 2026-04-18 00:37:45.145811 | orchestrator | + osism apply wireguard 2026-04-18 00:37:56.264413 | orchestrator | 2026-04-18 00:37:56 | INFO  | Prepare task for execution of wireguard. 2026-04-18 00:37:56.336312 | orchestrator | 2026-04-18 00:37:56 | INFO  | Task 74ac4bef-30b1-4a3a-8d8b-58038b757dd5 (wireguard) was prepared for execution. 
2026-04-18 00:37:56.336426 | orchestrator | 2026-04-18 00:37:56 | INFO  | It takes a moment until task 74ac4bef-30b1-4a3a-8d8b-58038b757dd5 (wireguard) has been started and output is visible here. 2026-04-18 00:38:13.443781 | orchestrator | 2026-04-18 00:38:13.443880 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-18 00:38:13.443893 | orchestrator | 2026-04-18 00:38:13.443951 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-18 00:38:13.443961 | orchestrator | Saturday 18 April 2026 00:37:59 +0000 (0:00:00.280) 0:00:00.280 ******** 2026-04-18 00:38:13.443970 | orchestrator | ok: [testbed-manager] 2026-04-18 00:38:13.443980 | orchestrator | 2026-04-18 00:38:13.443989 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-18 00:38:13.443998 | orchestrator | Saturday 18 April 2026 00:38:01 +0000 (0:00:01.683) 0:00:01.964 ******** 2026-04-18 00:38:13.444007 | orchestrator | changed: [testbed-manager] 2026-04-18 00:38:13.444016 | orchestrator | 2026-04-18 00:38:13.444025 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-18 00:38:13.444033 | orchestrator | Saturday 18 April 2026 00:38:06 +0000 (0:00:05.781) 0:00:07.745 ******** 2026-04-18 00:38:13.444042 | orchestrator | changed: [testbed-manager] 2026-04-18 00:38:13.444051 | orchestrator | 2026-04-18 00:38:13.444059 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-18 00:38:13.444068 | orchestrator | Saturday 18 April 2026 00:38:07 +0000 (0:00:00.488) 0:00:08.233 ******** 2026-04-18 00:38:13.444077 | orchestrator | changed: [testbed-manager] 2026-04-18 00:38:13.444085 | orchestrator | 2026-04-18 00:38:13.444094 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-18 00:38:13.444102 | orchestrator 
| Saturday 18 April 2026 00:38:07 +0000 (0:00:00.393) 0:00:08.627 ******** 2026-04-18 00:38:13.444111 | orchestrator | ok: [testbed-manager] 2026-04-18 00:38:13.444120 | orchestrator | 2026-04-18 00:38:13.444129 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-18 00:38:13.444137 | orchestrator | Saturday 18 April 2026 00:38:08 +0000 (0:00:00.465) 0:00:09.093 ******** 2026-04-18 00:38:13.444146 | orchestrator | ok: [testbed-manager] 2026-04-18 00:38:13.444155 | orchestrator | 2026-04-18 00:38:13.444163 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-18 00:38:13.444197 | orchestrator | Saturday 18 April 2026 00:38:08 +0000 (0:00:00.354) 0:00:09.448 ******** 2026-04-18 00:38:13.444206 | orchestrator | ok: [testbed-manager] 2026-04-18 00:38:13.444215 | orchestrator | 2026-04-18 00:38:13.444223 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-18 00:38:13.444232 | orchestrator | Saturday 18 April 2026 00:38:09 +0000 (0:00:00.366) 0:00:09.814 ******** 2026-04-18 00:38:13.444240 | orchestrator | changed: [testbed-manager] 2026-04-18 00:38:13.444249 | orchestrator | 2026-04-18 00:38:13.444258 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-18 00:38:13.444266 | orchestrator | Saturday 18 April 2026 00:38:10 +0000 (0:00:01.040) 0:00:10.855 ******** 2026-04-18 00:38:13.444275 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-18 00:38:13.444284 | orchestrator | changed: [testbed-manager] 2026-04-18 00:38:13.444292 | orchestrator | 2026-04-18 00:38:13.444301 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-18 00:38:13.444309 | orchestrator | Saturday 18 April 2026 00:38:10 +0000 (0:00:00.841) 0:00:11.696 ******** 2026-04-18 00:38:13.444318 | orchestrator | changed: 
[testbed-manager] 2026-04-18 00:38:13.444327 | orchestrator | 2026-04-18 00:38:13.444337 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-18 00:38:13.444347 | orchestrator | Saturday 18 April 2026 00:38:12 +0000 (0:00:01.607) 0:00:13.304 ******** 2026-04-18 00:38:13.444357 | orchestrator | changed: [testbed-manager] 2026-04-18 00:38:13.444366 | orchestrator | 2026-04-18 00:38:13.444376 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:38:13.444386 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:38:13.444396 | orchestrator | 2026-04-18 00:38:13.444406 | orchestrator | 2026-04-18 00:38:13.444416 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:38:13.444426 | orchestrator | Saturday 18 April 2026 00:38:13 +0000 (0:00:00.763) 0:00:14.067 ******** 2026-04-18 00:38:13.444436 | orchestrator | =============================================================================== 2026-04-18 00:38:13.444447 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.78s 2026-04-18 00:38:13.444456 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.68s 2026-04-18 00:38:13.444480 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.61s 2026-04-18 00:38:13.444490 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.04s 2026-04-18 00:38:13.444500 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.84s 2026-04-18 00:38:13.444509 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.76s 2026-04-18 00:38:13.444519 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s 
2026-04-18 00:38:13.444528 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.47s 2026-04-18 00:38:13.444539 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.39s 2026-04-18 00:38:13.444549 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.37s 2026-04-18 00:38:13.444558 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.35s 2026-04-18 00:38:13.552865 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-18 00:38:13.584565 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-18 00:38:13.584634 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-18 00:38:13.658980 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 187 0 --:--:-- --:--:-- --:--:-- 189 2026-04-18 00:38:13.669977 | orchestrator | + osism apply --environment custom workarounds 2026-04-18 00:38:14.760299 | orchestrator | 2026-04-18 00:38:14 | INFO  | Trying to run play workarounds in environment custom 2026-04-18 00:38:24.816383 | orchestrator | 2026-04-18 00:38:24 | INFO  | Prepare task for execution of workarounds. 2026-04-18 00:38:24.882955 | orchestrator | 2026-04-18 00:38:24 | INFO  | Task 612e5e9e-2e67-409c-9363-45c4ed3e3d6f (workarounds) was prepared for execution. 2026-04-18 00:38:24.883051 | orchestrator | 2026-04-18 00:38:24 | INFO  | It takes a moment until task 612e5e9e-2e67-409c-9363-45c4ed3e3d6f (workarounds) has been started and output is visible here. 
2026-04-18 00:38:48.134906 | orchestrator | 2026-04-18 00:38:48.135022 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-18 00:38:48.135041 | orchestrator | 2026-04-18 00:38:48.135053 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-18 00:38:48.135065 | orchestrator | Saturday 18 April 2026 00:38:27 +0000 (0:00:00.173) 0:00:00.173 ******** 2026-04-18 00:38:48.135076 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-18 00:38:48.135088 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-18 00:38:48.135099 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-18 00:38:48.135110 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-18 00:38:48.135121 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-18 00:38:48.135132 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-18 00:38:48.135143 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-18 00:38:48.135154 | orchestrator | 2026-04-18 00:38:48.135166 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-18 00:38:48.135177 | orchestrator | 2026-04-18 00:38:48.135188 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-18 00:38:48.135198 | orchestrator | Saturday 18 April 2026 00:38:28 +0000 (0:00:00.667) 0:00:00.840 ******** 2026-04-18 00:38:48.135210 | orchestrator | ok: [testbed-manager] 2026-04-18 00:38:48.135221 | orchestrator | 2026-04-18 00:38:48.135232 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-18 00:38:48.135243 | orchestrator | 2026-04-18 00:38:48.135254 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-18 00:38:48.135266 | orchestrator | Saturday 18 April 2026 00:38:30 +0000 (0:00:02.279) 0:00:03.120 ******** 2026-04-18 00:38:48.135277 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:38:48.135288 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:38:48.135299 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:38:48.135309 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:38:48.135320 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:38:48.135331 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:38:48.135342 | orchestrator | 2026-04-18 00:38:48.135353 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-18 00:38:48.135364 | orchestrator | 2026-04-18 00:38:48.135376 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-18 00:38:48.135387 | orchestrator | Saturday 18 April 2026 00:38:32 +0000 (0:00:02.243) 0:00:05.363 ******** 2026-04-18 00:38:48.135398 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-18 00:38:48.135410 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-18 00:38:48.135424 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-18 00:38:48.135436 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-18 00:38:48.135449 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-18 00:38:48.135462 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-18 00:38:48.135498 | orchestrator | 2026-04-18 00:38:48.135527 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-18 00:38:48.135541 | orchestrator | Saturday 18 April 2026 00:38:34 +0000 (0:00:01.350) 0:00:06.714 ******** 2026-04-18 00:38:48.135553 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:38:48.135568 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:38:48.135586 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:38:48.135605 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:38:48.135622 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:38:48.135639 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:38:48.135658 | orchestrator | 2026-04-18 00:38:48.135677 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-18 00:38:48.135699 | orchestrator | Saturday 18 April 2026 00:38:38 +0000 (0:00:03.929) 0:00:10.643 ******** 2026-04-18 00:38:48.135718 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:38:48.135738 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:38:48.135752 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:38:48.135764 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:38:48.135776 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:38:48.135788 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:38:48.135804 | orchestrator | 2026-04-18 00:38:48.135823 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-18 00:38:48.135836 | orchestrator | 2026-04-18 00:38:48.135847 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-18 00:38:48.135858 | orchestrator | Saturday 18 April 2026 00:38:38 +0000 (0:00:00.472) 0:00:11.115 ******** 2026-04-18 00:38:48.135891 | orchestrator | changed: [testbed-manager] 2026-04-18 00:38:48.135904 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:38:48.135923 | orchestrator | changed: [testbed-node-1] 2026-04-18 
00:38:48.135934 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:38:48.135945 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:38:48.135956 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:38:48.135966 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:38:48.135977 | orchestrator | 2026-04-18 00:38:48.135988 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-18 00:38:48.135998 | orchestrator | Saturday 18 April 2026 00:38:40 +0000 (0:00:01.595) 0:00:12.710 ******** 2026-04-18 00:38:48.136009 | orchestrator | changed: [testbed-manager] 2026-04-18 00:38:48.136020 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:38:48.136030 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:38:48.136041 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:38:48.136052 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:38:48.136062 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:38:48.136094 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:38:48.136105 | orchestrator | 2026-04-18 00:38:48.136116 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-18 00:38:48.136127 | orchestrator | Saturday 18 April 2026 00:38:41 +0000 (0:00:01.413) 0:00:14.124 ******** 2026-04-18 00:38:48.136138 | orchestrator | ok: [testbed-manager] 2026-04-18 00:38:48.136149 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:38:48.136159 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:38:48.136170 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:38:48.136180 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:38:48.136191 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:38:48.136202 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:38:48.136212 | orchestrator | 2026-04-18 00:38:48.136223 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-18 00:38:48.136234 | orchestrator 
| Saturday 18 April 2026 00:38:43 +0000 (0:00:01.556) 0:00:15.680 ******** 2026-04-18 00:38:48.136245 | orchestrator | changed: [testbed-manager] 2026-04-18 00:38:48.136256 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:38:48.136267 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:38:48.136288 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:38:48.136299 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:38:48.136309 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:38:48.136320 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:38:48.136331 | orchestrator | 2026-04-18 00:38:48.136342 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-18 00:38:48.136353 | orchestrator | Saturday 18 April 2026 00:38:44 +0000 (0:00:01.495) 0:00:17.175 ******** 2026-04-18 00:38:48.136363 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:38:48.136374 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:38:48.136384 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:38:48.136395 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:38:48.136405 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:38:48.136416 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:38:48.136427 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:38:48.136437 | orchestrator | 2026-04-18 00:38:48.136448 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-18 00:38:48.136459 | orchestrator | 2026-04-18 00:38:48.136469 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-18 00:38:48.136480 | orchestrator | Saturday 18 April 2026 00:38:45 +0000 (0:00:00.601) 0:00:17.777 ******** 2026-04-18 00:38:48.136491 | orchestrator | ok: [testbed-manager] 2026-04-18 00:38:48.136501 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:38:48.136512 | orchestrator | ok: 
[testbed-node-0] 2026-04-18 00:38:48.136523 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:38:48.136533 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:38:48.136544 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:38:48.136554 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:38:48.136565 | orchestrator | 2026-04-18 00:38:48.136576 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:38:48.136588 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-18 00:38:48.136600 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:38:48.136611 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:38:48.136622 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:38:48.136639 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:38:48.136650 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:38:48.136660 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:38:48.136671 | orchestrator | 2026-04-18 00:38:48.136682 | orchestrator | 2026-04-18 00:38:48.136693 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:38:48.136703 | orchestrator | Saturday 18 April 2026 00:38:48 +0000 (0:00:02.796) 0:00:20.573 ******** 2026-04-18 00:38:48.136714 | orchestrator | =============================================================================== 2026-04-18 00:38:48.136725 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.93s 2026-04-18 00:38:48.136735 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.80s 2026-04-18 00:38:48.136746 | orchestrator | Apply netplan configuration --------------------------------------------- 2.28s 2026-04-18 00:38:48.136757 | orchestrator | Apply netplan configuration --------------------------------------------- 2.24s 2026-04-18 00:38:48.136776 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.60s 2026-04-18 00:38:48.136787 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.56s 2026-04-18 00:38:48.136798 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.50s 2026-04-18 00:38:48.136809 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.41s 2026-04-18 00:38:48.136819 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.35s 2026-04-18 00:38:48.136830 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.67s 2026-04-18 00:38:48.136841 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s 2026-04-18 00:38:48.136857 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.47s 2026-04-18 00:38:48.419487 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-18 00:38:59.673936 | orchestrator | 2026-04-18 00:38:59 | INFO  | Prepare task for execution of reboot. 2026-04-18 00:38:59.747027 | orchestrator | 2026-04-18 00:38:59 | INFO  | Task 5ea9b2c8-d33b-4e51-a33c-96945970d6cb (reboot) was prepared for execution. 2026-04-18 00:38:59.747106 | orchestrator | 2026-04-18 00:38:59 | INFO  | It takes a moment until task 5ea9b2c8-d33b-4e51-a33c-96945970d6cb (reboot) has been started and output is visible here. 
2026-04-18 00:39:10.228455 | orchestrator | 2026-04-18 00:39:10.228574 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-18 00:39:10.228590 | orchestrator | 2026-04-18 00:39:10.228601 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-18 00:39:10.228612 | orchestrator | Saturday 18 April 2026 00:39:02 +0000 (0:00:00.222) 0:00:00.222 ******** 2026-04-18 00:39:10.228622 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:39:10.228633 | orchestrator | 2026-04-18 00:39:10.228643 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-18 00:39:10.228653 | orchestrator | Saturday 18 April 2026 00:39:02 +0000 (0:00:00.107) 0:00:00.329 ******** 2026-04-18 00:39:10.228663 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:39:10.228672 | orchestrator | 2026-04-18 00:39:10.228682 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-18 00:39:10.228692 | orchestrator | Saturday 18 April 2026 00:39:04 +0000 (0:00:01.234) 0:00:01.564 ******** 2026-04-18 00:39:10.228702 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:39:10.228721 | orchestrator | 2026-04-18 00:39:10.228731 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-18 00:39:10.228740 | orchestrator | 2026-04-18 00:39:10.228750 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-18 00:39:10.228760 | orchestrator | Saturday 18 April 2026 00:39:04 +0000 (0:00:00.095) 0:00:01.659 ******** 2026-04-18 00:39:10.228770 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:39:10.228779 | orchestrator | 2026-04-18 00:39:10.228789 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-18 00:39:10.228799 | orchestrator | Saturday 18 April 
2026 00:39:04 +0000 (0:00:00.100) 0:00:01.759 ******** 2026-04-18 00:39:10.228808 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:39:10.228818 | orchestrator | 2026-04-18 00:39:10.228828 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-18 00:39:10.228838 | orchestrator | Saturday 18 April 2026 00:39:05 +0000 (0:00:00.986) 0:00:02.746 ******** 2026-04-18 00:39:10.228847 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:39:10.228899 | orchestrator | 2026-04-18 00:39:10.228928 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-18 00:39:10.228944 | orchestrator | 2026-04-18 00:39:10.228967 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-18 00:39:10.228986 | orchestrator | Saturday 18 April 2026 00:39:05 +0000 (0:00:00.107) 0:00:02.854 ******** 2026-04-18 00:39:10.229003 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:39:10.229051 | orchestrator | 2026-04-18 00:39:10.229069 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-18 00:39:10.229087 | orchestrator | Saturday 18 April 2026 00:39:05 +0000 (0:00:00.085) 0:00:02.939 ******** 2026-04-18 00:39:10.229103 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:39:10.229120 | orchestrator | 2026-04-18 00:39:10.229147 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-18 00:39:10.229183 | orchestrator | Saturday 18 April 2026 00:39:06 +0000 (0:00:01.002) 0:00:03.942 ******** 2026-04-18 00:39:10.229200 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:39:10.229216 | orchestrator | 2026-04-18 00:39:10.229230 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-18 00:39:10.229247 | orchestrator | 2026-04-18 00:39:10.229266 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-04-18 00:39:10.229282 | orchestrator | Saturday 18 April 2026 00:39:06 +0000 (0:00:00.113) 0:00:04.055 ******** 2026-04-18 00:39:10.229302 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:39:10.229325 | orchestrator | 2026-04-18 00:39:10.229343 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-18 00:39:10.229358 | orchestrator | Saturday 18 April 2026 00:39:06 +0000 (0:00:00.086) 0:00:04.141 ******** 2026-04-18 00:39:10.229373 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:39:10.229390 | orchestrator | 2026-04-18 00:39:10.229408 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-18 00:39:10.229426 | orchestrator | Saturday 18 April 2026 00:39:07 +0000 (0:00:00.961) 0:00:05.103 ******** 2026-04-18 00:39:10.229443 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:39:10.229460 | orchestrator | 2026-04-18 00:39:10.229478 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-18 00:39:10.229494 | orchestrator | 2026-04-18 00:39:10.229513 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-18 00:39:10.229531 | orchestrator | Saturday 18 April 2026 00:39:07 +0000 (0:00:00.096) 0:00:05.200 ******** 2026-04-18 00:39:10.229550 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:39:10.229567 | orchestrator | 2026-04-18 00:39:10.229584 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-18 00:39:10.229601 | orchestrator | Saturday 18 April 2026 00:39:07 +0000 (0:00:00.170) 0:00:05.370 ******** 2026-04-18 00:39:10.229619 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:39:10.229638 | orchestrator | 2026-04-18 00:39:10.229657 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-18 00:39:10.229674 | orchestrator | Saturday 18 April 2026 00:39:08 +0000 (0:00:00.978) 0:00:06.349 ******** 2026-04-18 00:39:10.229692 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:39:10.229710 | orchestrator | 2026-04-18 00:39:10.229726 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-18 00:39:10.229745 | orchestrator | 2026-04-18 00:39:10.229764 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-18 00:39:10.229783 | orchestrator | Saturday 18 April 2026 00:39:08 +0000 (0:00:00.097) 0:00:06.446 ******** 2026-04-18 00:39:10.229800 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:39:10.229818 | orchestrator | 2026-04-18 00:39:10.229838 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-18 00:39:10.229932 | orchestrator | Saturday 18 April 2026 00:39:09 +0000 (0:00:00.083) 0:00:06.530 ******** 2026-04-18 00:39:10.229953 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:39:10.229971 | orchestrator | 2026-04-18 00:39:10.229988 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-18 00:39:10.230005 | orchestrator | Saturday 18 April 2026 00:39:10 +0000 (0:00:01.029) 0:00:07.560 ******** 2026-04-18 00:39:10.230144 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:39:10.230167 | orchestrator | 2026-04-18 00:39:10.230184 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:39:10.230203 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:39:10.230242 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:39:10.230262 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-18 00:39:10.230281 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:39:10.230300 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:39:10.230319 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 00:39:10.230339 | orchestrator | 2026-04-18 00:39:10.230358 | orchestrator | 2026-04-18 00:39:10.230376 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:39:10.230395 | orchestrator | Saturday 18 April 2026 00:39:10 +0000 (0:00:00.034) 0:00:07.594 ******** 2026-04-18 00:39:10.230413 | orchestrator | =============================================================================== 2026-04-18 00:39:10.230424 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.19s 2026-04-18 00:39:10.230435 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2026-04-18 00:39:10.230446 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s 2026-04-18 00:39:10.338062 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-18 00:39:21.585568 | orchestrator | 2026-04-18 00:39:21 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-18 00:39:21.664071 | orchestrator | 2026-04-18 00:39:21 | INFO  | Task 858e50b7-ef36-4f21-932d-0c5776757bb5 (wait-for-connection) was prepared for execution. 2026-04-18 00:39:21.664182 | orchestrator | 2026-04-18 00:39:21 | INFO  | It takes a moment until task 858e50b7-ef36-4f21-932d-0c5776757bb5 (wait-for-connection) has been started and output is visible here. 
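The two runs above implement a common two-step pattern: the `reboot` play triggers the reboot without waiting (the "wait for the reboot to complete" task is skipped), and the separate `wait-for-connection` play then polls until SSH answers again. A rough shell sketch of the same pattern, with hypothetical host and timeout parameters (this is an illustration, not the OSISM playbooks):

```shell
# Sketch of the reboot-then-wait pattern shown above: trigger the
# reboot, tolerate the dropped connection, then poll until SSH is
# reachable again or a timeout expires. Host/timeout are assumptions.
reboot_and_wait() {
    local host=$1 timeout=${2:-600}
    # The SSH connection usually dies while the command runs; expected.
    ssh "$host" 'sudo shutdown -r now' || true
    local deadline=$(( $(date +%s) + timeout ))
    until ssh -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        if (( $(date +%s) >= deadline )); then
            echo "timed out waiting for ${host}" >&2
            return 1
        fi
        sleep 10
    done
}
```

Splitting the trigger from the wait lets all six nodes reboot in parallel before any of them is polled, which is why the reboot play finishes in ~6 s while `wait-for-connection` absorbs the actual downtime.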
2026-04-18 00:39:36.333358 | orchestrator | 2026-04-18 00:39:36.333475 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-18 00:39:36.333493 | orchestrator | 2026-04-18 00:39:36.333505 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-18 00:39:36.333517 | orchestrator | Saturday 18 April 2026 00:39:24 +0000 (0:00:00.229) 0:00:00.229 ******** 2026-04-18 00:39:36.333528 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:39:36.333540 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:39:36.333551 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:39:36.333562 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:39:36.333573 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:39:36.333583 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:39:36.333594 | orchestrator | 2026-04-18 00:39:36.333605 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:39:36.333617 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:39:36.333629 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:39:36.333640 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:39:36.333651 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:39:36.333662 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:39:36.333701 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:39:36.333713 | orchestrator | 2026-04-18 00:39:36.333724 | orchestrator | 2026-04-18 00:39:36.333735 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-18 00:39:36.333746 | orchestrator | Saturday 18 April 2026 00:39:36 +0000 (0:00:11.501) 0:00:11.731 ******** 2026-04-18 00:39:36.333757 | orchestrator | =============================================================================== 2026-04-18 00:39:36.333768 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.50s 2026-04-18 00:39:36.447157 | orchestrator | + osism apply hddtemp 2026-04-18 00:39:47.553753 | orchestrator | 2026-04-18 00:39:47 | INFO  | Prepare task for execution of hddtemp. 2026-04-18 00:39:47.620964 | orchestrator | 2026-04-18 00:39:47 | INFO  | Task d1b46985-96b5-44f5-abb0-50992d88e6ce (hddtemp) was prepared for execution. 2026-04-18 00:39:47.621063 | orchestrator | 2026-04-18 00:39:47 | INFO  | It takes a moment until task d1b46985-96b5-44f5-abb0-50992d88e6ce (hddtemp) has been started and output is visible here. 2026-04-18 00:40:14.774007 | orchestrator | 2026-04-18 00:40:14.774143 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-18 00:40:14.774156 | orchestrator | 2026-04-18 00:40:14.774163 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-18 00:40:14.774171 | orchestrator | Saturday 18 April 2026 00:39:50 +0000 (0:00:00.290) 0:00:00.290 ******** 2026-04-18 00:40:14.774178 | orchestrator | ok: [testbed-manager] 2026-04-18 00:40:14.774186 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:40:14.774193 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:40:14.774200 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:40:14.774207 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:40:14.774214 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:40:14.774222 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:40:14.774228 | orchestrator | 2026-04-18 00:40:14.774235 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-18 00:40:14.774243 | orchestrator | Saturday 18 April 2026 00:39:51 +0000 (0:00:00.492) 0:00:00.783 ******** 2026-04-18 00:40:14.774252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:40:14.774261 | orchestrator | 2026-04-18 00:40:14.774268 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-18 00:40:14.774275 | orchestrator | Saturday 18 April 2026 00:39:51 +0000 (0:00:00.829) 0:00:01.612 ******** 2026-04-18 00:40:14.774282 | orchestrator | ok: [testbed-manager] 2026-04-18 00:40:14.774288 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:40:14.774295 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:40:14.774301 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:40:14.774307 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:40:14.774313 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:40:14.774336 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:40:14.774343 | orchestrator | 2026-04-18 00:40:14.774350 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-18 00:40:14.774356 | orchestrator | Saturday 18 April 2026 00:39:54 +0000 (0:00:02.310) 0:00:03.923 ******** 2026-04-18 00:40:14.774362 | orchestrator | changed: [testbed-manager] 2026-04-18 00:40:14.774370 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:40:14.774377 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:40:14.774384 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:40:14.774390 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:40:14.774397 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:40:14.774404 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:40:14.774411 | 
orchestrator | 2026-04-18 00:40:14.774437 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-18 00:40:14.774444 | orchestrator | Saturday 18 April 2026 00:39:55 +0000 (0:00:00.881) 0:00:04.805 ******** 2026-04-18 00:40:14.774455 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:40:14.774461 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:40:14.774467 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:40:14.774473 | orchestrator | ok: [testbed-manager] 2026-04-18 00:40:14.774480 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:40:14.774486 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:40:14.774492 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:40:14.774499 | orchestrator | 2026-04-18 00:40:14.774506 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-18 00:40:14.774513 | orchestrator | Saturday 18 April 2026 00:39:56 +0000 (0:00:01.177) 0:00:05.982 ******** 2026-04-18 00:40:14.774519 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:40:14.774525 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:40:14.774531 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:40:14.774537 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:40:14.774544 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:40:14.774550 | orchestrator | changed: [testbed-manager] 2026-04-18 00:40:14.774557 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:40:14.774563 | orchestrator | 2026-04-18 00:40:14.774570 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-18 00:40:14.774578 | orchestrator | Saturday 18 April 2026 00:39:56 +0000 (0:00:00.532) 0:00:06.515 ******** 2026-04-18 00:40:14.774585 | orchestrator | changed: [testbed-manager] 2026-04-18 00:40:14.774592 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:40:14.774598 | orchestrator | changed: [testbed-node-3] 
2026-04-18 00:40:14.774604 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:40:14.774610 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:40:14.774617 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:40:14.774624 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:40:14.774631 | orchestrator | 2026-04-18 00:40:14.774639 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-18 00:40:14.774646 | orchestrator | Saturday 18 April 2026 00:40:11 +0000 (0:00:14.590) 0:00:21.106 ******** 2026-04-18 00:40:14.774655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:40:14.774663 | orchestrator | 2026-04-18 00:40:14.774670 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-18 00:40:14.774677 | orchestrator | Saturday 18 April 2026 00:40:12 +0000 (0:00:01.186) 0:00:22.292 ******** 2026-04-18 00:40:14.774685 | orchestrator | changed: [testbed-manager] 2026-04-18 00:40:14.774692 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:40:14.774699 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:40:14.774707 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:40:14.774714 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:40:14.774721 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:40:14.774727 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:40:14.774734 | orchestrator | 2026-04-18 00:40:14.774740 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:40:14.774748 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:40:14.774773 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-18 00:40:14.774782 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-18 00:40:14.774788 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-18 00:40:14.774858 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-18 00:40:14.774870 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-18 00:40:14.774876 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-18 00:40:14.774883 | orchestrator | 2026-04-18 00:40:14.774889 | orchestrator | 2026-04-18 00:40:14.774896 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:40:14.774903 | orchestrator | Saturday 18 April 2026 00:40:14 +0000 (0:00:01.913) 0:00:24.206 ******** 2026-04-18 00:40:14.774910 | orchestrator | =============================================================================== 2026-04-18 00:40:14.774916 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.59s 2026-04-18 00:40:14.774923 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.31s 2026-04-18 00:40:14.774929 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.91s 2026-04-18 00:40:14.774935 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.19s 2026-04-18 00:40:14.774942 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.18s 2026-04-18 00:40:14.774948 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.88s 2026-04-18 00:40:14.774954 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 0.83s 2026-04-18 00:40:14.774960 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.53s 2026-04-18 00:40:14.774966 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.49s 2026-04-18 00:40:14.940352 | orchestrator | ++ semver 10.0.0 7.1.1 2026-04-18 00:40:14.991558 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-18 00:40:14.991628 | orchestrator | + sudo systemctl restart manager.service 2026-04-18 00:40:28.459795 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-18 00:40:28.460038 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-18 00:40:28.460059 | orchestrator | + local max_attempts=60 2026-04-18 00:40:28.460073 | orchestrator | + local name=ceph-ansible 2026-04-18 00:40:28.460085 | orchestrator | + local attempt_num=1 2026-04-18 00:40:28.460097 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:40:28.488134 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:40:28.488223 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:40:28.488241 | orchestrator | + sleep 5 2026-04-18 00:40:33.491272 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:40:33.585281 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:40:33.585377 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:40:33.585390 | orchestrator | + sleep 5 2026-04-18 00:40:38.588727 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:40:38.617113 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:40:38.617221 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:40:38.617247 | orchestrator | + sleep 5 2026-04-18 00:40:43.621325 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 
2026-04-18 00:40:43.656056 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:40:43.656147 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:40:43.656155 | orchestrator | + sleep 5 2026-04-18 00:40:48.659591 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:40:48.695203 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:40:48.695279 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:40:48.695288 | orchestrator | + sleep 5 2026-04-18 00:40:53.699362 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:40:53.736506 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:40:53.736602 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:40:53.736648 | orchestrator | + sleep 5 2026-04-18 00:40:58.741064 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:40:58.783420 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:40:58.783540 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:40:58.783562 | orchestrator | + sleep 5 2026-04-18 00:41:03.787274 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:41:03.827138 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-18 00:41:03.827222 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:41:03.827234 | orchestrator | + sleep 5 2026-04-18 00:41:08.830331 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:41:08.864643 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-18 00:41:08.864729 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:41:08.864738 | orchestrator | + sleep 5 2026-04-18 00:41:13.868636 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 
00:41:13.908078 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-18 00:41:13.908182 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:41:13.908198 | orchestrator | + sleep 5 2026-04-18 00:41:18.912288 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:41:18.949160 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-18 00:41:18.949264 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:41:18.949282 | orchestrator | + sleep 5 2026-04-18 00:41:23.953377 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:41:23.992363 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-18 00:41:23.992478 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:41:23.992496 | orchestrator | + sleep 5 2026-04-18 00:41:28.996945 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:41:29.031298 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-18 00:41:29.031393 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-18 00:41:29.031551 | orchestrator | + sleep 5 2026-04-18 00:41:34.034300 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-18 00:41:34.074683 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:41:34.074996 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-18 00:41:34.075032 | orchestrator | + local max_attempts=60 2026-04-18 00:41:34.075054 | orchestrator | + local name=kolla-ansible 2026-04-18 00:41:34.075093 | orchestrator | + local attempt_num=1 2026-04-18 00:41:34.075110 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-18 00:41:34.105868 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:41:34.105970 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-18 00:41:34.105986 | 
orchestrator | + local max_attempts=60 2026-04-18 00:41:34.105999 | orchestrator | + local name=osism-ansible 2026-04-18 00:41:34.106011 | orchestrator | + local attempt_num=1 2026-04-18 00:41:34.106435 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-18 00:41:34.147642 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-18 00:41:34.147758 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-18 00:41:34.147840 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-18 00:41:34.295257 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-18 00:41:34.442933 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-18 00:41:34.578321 | orchestrator | ARA in osism-ansible already disabled. 2026-04-18 00:41:34.694705 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-18 00:41:34.695618 | orchestrator | + osism apply gather-facts 2026-04-18 00:41:45.717546 | orchestrator | 2026-04-18 00:41:45 | INFO  | Prepare task for execution of gather-facts. 2026-04-18 00:41:45.784374 | orchestrator | 2026-04-18 00:41:45 | INFO  | Task e8b27f8b-3e30-4042-bbd9-a442830c2e3f (gather-facts) was prepared for execution. 2026-04-18 00:41:45.784731 | orchestrator | 2026-04-18 00:41:45 | INFO  | It takes a moment until task e8b27f8b-3e30-4042-bbd9-a442830c2e3f (gather-facts) has been started and output is visible here. 
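The repeated `docker inspect` / `sleep 5` iterations in the xtrace above come from the `wait_for_container_healthy` helper. A reconstruction from the trace (variable names and the 5-second interval match the trace; the failure message is an assumption, and the trace invokes docker via its absolute path `/usr/bin/docker`):

```shell
# Reconstructed from the xtrace: poll a container's health status
# until it reports "healthy", giving up after max_attempts tries.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log, `ceph-ansible` cycles through `unhealthy` → `starting` → `healthy` over roughly a minute after the `manager.service` restart, while `kolla-ansible` and `osism-ansible` are already healthy on the first probe.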
2026-04-18 00:41:49.331423 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-04-18 00:41:49.331526 | orchestrator | -vvvv to see details 2026-04-18 00:41:49.331565 | orchestrator | 2026-04-18 00:41:49.331578 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-18 00:41:49.331590 | orchestrator | 2026-04-18 00:41:49.331601 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-18 00:41:49.331628 | orchestrator | fatal: [testbed-manager]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.5\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.5: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-18 00:41:49.331642 | orchestrator | ...ignoring 2026-04-18 00:41:49.331654 | orchestrator | fatal: [testbed-node-0]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.10\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.10: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-18 00:41:49.331666 | orchestrator | ...ignoring 2026-04-18 00:41:49.331677 | orchestrator | fatal: [testbed-node-1]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.11\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.11: Permission denied (publickey).\r\n", "unreachable": true} 2026-04-18 00:41:49.331688 | orchestrator | ...ignoring 2026-04-18 00:41:49.331700 | orchestrator | fatal: [testbed-node-5]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.15\". 
Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.15: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-18 00:41:49.331711 | orchestrator | ...ignoring
2026-04-18 00:41:49.331727 | orchestrator | fatal: [testbed-node-3]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.13\". Make sure this host can be reached over ssh: [Errno 32] Broken pipe. [Errno 32] Broken pipe", "unreachable": true}
2026-04-18 00:41:49.331748 | orchestrator | ...ignoring
2026-04-18 00:41:49.331767 | orchestrator | fatal: [testbed-node-2]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.12\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.12: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-18 00:41:49.331870 | orchestrator | ...ignoring
2026-04-18 00:41:49.331893 | orchestrator | fatal: [testbed-node-4]: UNREACHABLE! => {"changed": false, "msg": "Data could not be sent to remote host \"192.168.16.14\". Make sure this host can be reached over ssh: no such identity: /ansible/secrets/id_rsa: No such file or directory\r\ndragon@192.168.16.14: Permission denied (publickey).\r\n", "unreachable": true}
2026-04-18 00:41:49.331913 | orchestrator | ...ignoring
2026-04-18 00:41:49.331933 | orchestrator |
2026-04-18 00:41:49.331954 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-18 00:41:49.331974 | orchestrator |
2026-04-18 00:41:49.331987 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-18 00:41:49.331998 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:41:49.332009 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:41:49.332020 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:41:49.332031 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:41:49.332041 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:41:49.332052 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:41:49.332062 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:41:49.332073 | orchestrator |
2026-04-18 00:41:49.332084 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:41:49.332095 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-18 00:41:49.332119 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-18 00:41:49.332130 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-18 00:41:49.332141 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-18 00:41:49.332152 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-18 00:41:49.332181 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-18 00:41:49.332193 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-18 00:41:49.332204 | orchestrator |
2026-04-18 00:41:49.450632 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-04-18 00:41:49.467564 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-04-18 00:41:49.476687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-04-18 00:41:49.486717 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-04-18 00:41:49.496258 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-04-18 00:41:49.517544 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-04-18 00:41:49.526092 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-04-18 00:41:49.539236 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-04-18 00:41:49.547814 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-04-18 00:41:49.560964 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-04-18 00:41:49.570920 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-04-18 00:41:49.584905 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-04-18 00:41:49.595927 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-04-18 00:41:49.609638 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-04-18 00:41:49.623955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-04-18 00:41:49.641920 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-04-18 00:41:49.657714 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-04-18 00:41:49.676022 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-04-18 00:41:49.694573 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-04-18 00:41:49.709139 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia
2026-04-18 00:41:49.728581 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-04-18 00:41:49.746842 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-04-18 00:41:49.767553 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-04-18 00:41:49.786338 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-18 00:41:49.899337 | orchestrator | ok: Runtime: 0:23:28.728488
2026-04-18 00:41:49.999831 |
2026-04-18 00:41:49.999980 | TASK [Deploy services]
2026-04-18 00:41:50.535176 | orchestrator | skipping: Conditional result was False
2026-04-18 00:41:50.554511 |
2026-04-18 00:41:50.554749 | TASK [Deploy in a nutshell]
2026-04-18 00:41:51.269098 | orchestrator | + set -e
2026-04-18 00:41:51.270869 | orchestrator |
2026-04-18 00:41:51.270934 | orchestrator | # PULL IMAGES
2026-04-18 00:41:51.270941 | orchestrator |
2026-04-18 00:41:51.270954 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-18 00:41:51.270963 | orchestrator | ++ export INTERACTIVE=false
2026-04-18 00:41:51.270969 | orchestrator | ++ INTERACTIVE=false
2026-04-18 00:41:51.270988 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-18 00:41:51.270997 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-18 00:41:51.271003 | orchestrator | + source /opt/manager-vars.sh
2026-04-18 00:41:51.271007 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-18 00:41:51.271015 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-18 00:41:51.271019 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-18 00:41:51.271026 | orchestrator | ++ CEPH_VERSION=reef
2026-04-18 00:41:51.271030 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-18 00:41:51.271036 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-18 00:41:51.271040 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-18 00:41:51.271046 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-18 00:41:51.271050 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-18 00:41:51.271054 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-18 00:41:51.271058 | orchestrator | ++ export ARA=false
2026-04-18 00:41:51.271062 | orchestrator | ++ ARA=false
2026-04-18 00:41:51.271066 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-18 00:41:51.271070 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-18 00:41:51.271074 | orchestrator | ++ export TEMPEST=true
2026-04-18 00:41:51.271077 | orchestrator | ++ TEMPEST=true
2026-04-18 00:41:51.271081 | orchestrator | ++ export IS_ZUUL=true
2026-04-18 00:41:51.271085 | orchestrator | ++ IS_ZUUL=true
2026-04-18 00:41:51.271088 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.97
2026-04-18 00:41:51.271092 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.97
2026-04-18 00:41:51.271096 | orchestrator | ++ export EXTERNAL_API=false
2026-04-18 00:41:51.271100 | orchestrator | ++ EXTERNAL_API=false
2026-04-18 00:41:51.271103 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-18 00:41:51.271107 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-18 00:41:51.271111 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-18 00:41:51.271115 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-18 00:41:51.271118 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-18 00:41:51.271122 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-18 00:41:51.271126 | orchestrator | + echo
2026-04-18 00:41:51.271130 | orchestrator | + echo '# PULL IMAGES'
2026-04-18 00:41:51.271134 | orchestrator | + echo
2026-04-18 00:41:51.271142 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-18 00:41:51.329837 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-18 00:41:51.329913 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-18 00:41:52.516960 | orchestrator | 2026-04-18 00:41:52 | INFO  | Trying to run play pull-images in environment custom
2026-04-18 00:42:02.561344 | orchestrator | 2026-04-18 00:42:02 | INFO  | Prepare task for execution of pull-images.
2026-04-18 00:42:02.641663 | orchestrator | 2026-04-18 00:42:02 | INFO  | Task 88029109-3bca-4677-b1e1-222be1e1a1f2 (pull-images) was prepared for execution.
2026-04-18 00:42:02.641754 | orchestrator | 2026-04-18 00:42:02 | INFO  | Task 88029109-3bca-4677-b1e1-222be1e1a1f2 is running in background. No more output. Check ARA for logs.
2026-04-18 00:42:04.068463 | orchestrator | 2026-04-18 00:42:04 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-18 00:42:14.286305 | orchestrator | 2026-04-18 00:42:14 | INFO  | Prepare task for execution of wipe-partitions.
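The long run of `+ sudo ln -sf …` lines earlier in this output follows a single pattern: each deploy, upgrade, and bootstrap script under `/opt/configuration/scripts` is given a stable command name in `/usr/local/bin`. A minimal sketch of that pattern, with an invented helper name (`install_command_links`) and a parameterized target directory so it can be exercised without root:

```shell
# Illustrative sketch only; the testbed uses individual `sudo ln -sf` calls.
install_command_links() {
  # Usage: install_command_links BIN_DIR name=/path/to/script.sh ...
  # Symlinks each script to a stable command name inside BIN_DIR.
  local bin_dir="$1"
  shift
  local entry
  for entry in "$@"; do
    # entry looks like "deploy-openstack=/opt/.../300-openstack.sh"
    ln -sf "${entry#*=}" "${bin_dir}/${entry%%=*}"
  done
}
```

With `BIN_DIR=/usr/local/bin` (and `sudo`) this reproduces the behavior seen in the log; `ln -sf` keeps the operation idempotent across repeated job runs.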
2026-04-18 00:42:14.353921 | orchestrator | 2026-04-18 00:42:14 | INFO  | Task 032b3ca0-49d2-472b-8c09-4a153883851c (wipe-partitions) was prepared for execution.
2026-04-18 00:42:14.354077 | orchestrator | 2026-04-18 00:42:14 | INFO  | It takes a moment until task 032b3ca0-49d2-472b-8c09-4a153883851c (wipe-partitions) has been started and output is visible here.
2026-04-18 00:42:25.589137 | orchestrator |
2026-04-18 00:42:25.589237 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-18 00:42:25.589246 | orchestrator |
2026-04-18 00:42:25.589251 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-18 00:42:25.589261 | orchestrator | Saturday 18 April 2026 00:42:16 +0000 (0:00:00.120) 0:00:00.120 ********
2026-04-18 00:42:25.589268 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:42:25.589292 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:42:25.589298 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:42:25.589303 | orchestrator |
2026-04-18 00:42:25.589309 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-18 00:42:25.589314 | orchestrator | Saturday 18 April 2026 00:42:18 +0000 (0:00:01.223) 0:00:01.344 ********
2026-04-18 00:42:25.589319 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:42:25.589327 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:42:25.589332 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:42:25.589337 | orchestrator |
2026-04-18 00:42:25.589343 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-04-18 00:42:25.589348 | orchestrator | Saturday 18 April 2026 00:42:18 +0000 (0:00:00.211) 0:00:01.556 ********
2026-04-18 00:42:25.589353 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:42:25.589359 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:42:25.589364 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:42:25.589369 | orchestrator |
2026-04-18 00:42:25.589374 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-04-18 00:42:25.589379 | orchestrator | Saturday 18 April 2026 00:42:18 +0000 (0:00:00.533) 0:00:02.090 ********
2026-04-18 00:42:25.589384 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:42:25.589389 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:42:25.589394 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:42:25.589399 | orchestrator |
2026-04-18 00:42:25.589404 | orchestrator | TASK [Check device availability] ***********************************************
2026-04-18 00:42:25.589409 | orchestrator | Saturday 18 April 2026 00:42:19 +0000 (0:00:00.209) 0:00:02.300 ********
2026-04-18 00:42:25.589415 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-18 00:42:25.589422 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-18 00:42:25.589427 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-18 00:42:25.589432 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-18 00:42:25.589437 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-18 00:42:25.589442 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-18 00:42:25.589447 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-18 00:42:25.589452 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-18 00:42:25.589457 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-18 00:42:25.589462 | orchestrator |
2026-04-18 00:42:25.589467 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-04-18 00:42:25.589472 | orchestrator | Saturday 18 April 2026 00:42:20 +0000 (0:00:01.409) 0:00:03.709 ********
2026-04-18 00:42:25.589478 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-04-18 00:42:25.589483 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-04-18 00:42:25.589488 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-04-18 00:42:25.589493 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-04-18 00:42:25.589498 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-04-18 00:42:25.589503 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-04-18 00:42:25.589507 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-04-18 00:42:25.589512 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-04-18 00:42:25.589521 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-04-18 00:42:25.589526 | orchestrator |
2026-04-18 00:42:25.589531 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-04-18 00:42:25.589536 | orchestrator | Saturday 18 April 2026 00:42:21 +0000 (0:00:01.442) 0:00:05.152 ********
2026-04-18 00:42:25.589542 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-04-18 00:42:25.589547 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-04-18 00:42:25.589552 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-04-18 00:42:25.589557 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-04-18 00:42:25.589562 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-04-18 00:42:25.589572 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-04-18 00:42:25.589578 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-04-18 00:42:25.589583 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-18 00:42:25.589588 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-18 00:42:25.589593 | orchestrator |
2026-04-18 00:42:25.589598 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-18 00:42:25.589603 | orchestrator | Saturday 18 April 2026 00:42:24 +0000 (0:00:02.131) 0:00:07.284 ********
2026-04-18 00:42:25.589608 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:42:25.589613 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:42:25.589618 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:42:25.589623 | orchestrator |
2026-04-18 00:42:25.589628 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-18 00:42:25.589633 | orchestrator | Saturday 18 April 2026 00:42:24 +0000 (0:00:00.606) 0:00:07.890 ********
2026-04-18 00:42:25.589638 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:42:25.589643 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:42:25.589648 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:42:25.589653 | orchestrator |
2026-04-18 00:42:25.589659 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:42:25.589665 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:25.589671 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:25.589687 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:25.589694 | orchestrator |
2026-04-18 00:42:25.589700 | orchestrator |
2026-04-18 00:42:25.589706 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:42:25.589711 | orchestrator | Saturday 18 April 2026 00:42:25 +0000 (0:00:00.650) 0:00:08.540 ********
2026-04-18 00:42:25.589717 | orchestrator | ===============================================================================
2026-04-18 00:42:25.589723 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s
2026-04-18 00:42:25.589729 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.44s
2026-04-18 00:42:25.589735 | orchestrator | Check device availability ----------------------------------------------- 1.41s
2026-04-18 00:42:25.589741 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.22s
2026-04-18 00:42:25.589747 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s
2026-04-18 00:42:25.589753 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2026-04-18 00:42:25.589758 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.53s
2026-04-18 00:42:25.589764 | orchestrator | Remove all rook related logical devices --------------------------------- 0.21s
2026-04-18 00:42:25.589770 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s
2026-04-18 00:42:36.832337 | orchestrator | 2026-04-18 00:42:36 | INFO  | Prepare task for execution of facts.
2026-04-18 00:42:36.899976 | orchestrator | 2026-04-18 00:42:36 | INFO  | Task 02b3c42e-5d4f-4dc7-9ae0-9826d4cbcae8 (facts) was prepared for execution.
2026-04-18 00:42:36.900048 | orchestrator | 2026-04-18 00:42:36 | INFO  | It takes a moment until task 02b3c42e-5d4f-4dc7-9ae0-9826d4cbcae8 (facts) has been started and output is visible here.
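Per device, the wipe-partitions play above reduces to a few shell steps: remove filesystem and partition signatures, zero the first 32M, then have udev re-examine the devices. A stand-alone approximation (the actual play is Ansible; `wipe_device` is a name invented here, and the `wipefs` call is guarded so the sketch also runs where that tool is absent):

```shell
# Approximation of the per-device wipe steps; not the actual Ansible play.
wipe_device() {
  local dev="$1"
  if command -v wipefs >/dev/null 2>&1; then
    wipefs -a "$dev"    # erase filesystem/partition-table signatures
  fi
  # Zero the first 32 MiB; conv=notrunc keeps a regular file at full size
  # (irrelevant for block devices, harmless for files used in testing).
  dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none
}
# Afterwards the play reloads udev rules and replays kernel device events,
# typically: udevadm control --reload-rules && udevadm trigger
```

Run against `/dev/sdb` and friends this is destructive, which is why the play runs only on the freshly reprovisioned storage nodes (testbed-node-3/4/5).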
2026-04-18 00:42:48.398261 | orchestrator |
2026-04-18 00:42:48.398363 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-18 00:42:48.398379 | orchestrator |
2026-04-18 00:42:48.398391 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-18 00:42:48.398431 | orchestrator | Saturday 18 April 2026 00:42:39 +0000 (0:00:00.298) 0:00:00.298 ********
2026-04-18 00:42:48.398444 | orchestrator | ok: [testbed-manager]
2026-04-18 00:42:48.398456 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:42:48.398467 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:42:48.398479 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:42:48.398490 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:42:48.398501 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:42:48.398512 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:42:48.398523 | orchestrator |
2026-04-18 00:42:48.398535 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-18 00:42:48.398546 | orchestrator | Saturday 18 April 2026 00:42:41 +0000 (0:00:01.324) 0:00:01.623 ********
2026-04-18 00:42:48.398558 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:42:48.398570 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:42:48.398582 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:42:48.398609 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:42:48.398621 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:42:48.398631 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:42:48.398642 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:42:48.398654 | orchestrator |
2026-04-18 00:42:48.398665 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-18 00:42:48.398676 | orchestrator |
2026-04-18 00:42:48.398688 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-18 00:42:48.398699 | orchestrator | Saturday 18 April 2026 00:42:42 +0000 (0:00:01.047) 0:00:02.670 ********
2026-04-18 00:42:48.398712 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:42:48.398723 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:42:48.398734 | orchestrator | ok: [testbed-manager]
2026-04-18 00:42:48.398744 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:42:48.398755 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:42:48.398766 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:42:48.398777 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:42:48.398787 | orchestrator |
2026-04-18 00:42:48.398798 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-18 00:42:48.398809 | orchestrator |
2026-04-18 00:42:48.398820 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-18 00:42:48.398886 | orchestrator | Saturday 18 April 2026 00:42:47 +0000 (0:00:05.374) 0:00:08.044 ********
2026-04-18 00:42:48.398898 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:42:48.398909 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:42:48.398919 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:42:48.398930 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:42:48.398941 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:42:48.398951 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:42:48.398963 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:42:48.398973 | orchestrator |
2026-04-18 00:42:48.398984 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:42:48.398995 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:48.399008 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:48.399018 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:48.399029 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:48.399040 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:48.399051 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:48.399070 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:42:48.399081 | orchestrator |
2026-04-18 00:42:48.399092 | orchestrator |
2026-04-18 00:42:48.399103 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:42:48.399114 | orchestrator | Saturday 18 April 2026 00:42:48 +0000 (0:00:00.515) 0:00:08.560 ********
2026-04-18 00:42:48.399125 | orchestrator | ===============================================================================
2026-04-18 00:42:48.399135 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.37s
2026-04-18 00:42:48.399146 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.32s
2026-04-18 00:42:48.399157 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s
2026-04-18 00:42:48.399167 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2026-04-18 00:42:49.831558 | orchestrator | 2026-04-18 00:42:49 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-04-18 00:42:49.892192 | orchestrator | 2026-04-18 00:42:49 | INFO  | Task d3a6269f-7e65-4ed3-8cd4-d115d202bdb5 (ceph-configure-lvm-volumes) was prepared for execution.
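Every play in this job ends with a PLAY RECAP block like the one above. When post-processing such logs, a common check is to parse the per-host counters and flag any non-zero `failed=` or `unreachable=` value; a small sketch (the helper name `recap_ok` is invented for illustration):

```shell
# Succeed only when a PLAY RECAP host line reports failed=0 and unreachable=0.
recap_ok() {
  local line="$1" failed unreachable
  failed=$(sed -n 's/.*failed=\([0-9]*\).*/\1/p' <<<"$line")
  unreachable=$(sed -n 's/.*unreachable=\([0-9]*\).*/\1/p' <<<"$line")
  # Treat a missing counter as a failure rather than silently passing.
  [ "${failed:-1}" -eq 0 ] && [ "${unreachable:-1}" -eq 0 ]
}
```

Note that in this job the earlier SSH failures still pass such a check: the play marked them `...ignoring`, so they show up as `ignored=1` rather than `failed=1` in the recap.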
2026-04-18 00:42:49.892277 | orchestrator | 2026-04-18 00:42:49 | INFO  | It takes a moment until task d3a6269f-7e65-4ed3-8cd4-d115d202bdb5 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-04-18 00:43:00.711226 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-18 00:43:00.711297 | orchestrator | 2.16.14
2026-04-18 00:43:00.711304 | orchestrator |
2026-04-18 00:43:00.711310 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-18 00:43:00.711315 | orchestrator |
2026-04-18 00:43:00.711319 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-18 00:43:00.711324 | orchestrator | Saturday 18 April 2026 00:42:54 +0000 (0:00:00.255) 0:00:00.255 ********
2026-04-18 00:43:00.711329 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-18 00:43:00.711339 | orchestrator |
2026-04-18 00:43:00.711344 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-18 00:43:00.711348 | orchestrator | Saturday 18 April 2026 00:42:54 +0000 (0:00:00.206) 0:00:00.461 ********
2026-04-18 00:43:00.711352 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:43:00.711357 | orchestrator |
2026-04-18 00:43:00.711361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711365 | orchestrator | Saturday 18 April 2026 00:42:54 +0000 (0:00:00.200) 0:00:00.662 ********
2026-04-18 00:43:00.711369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-18 00:43:00.711374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-18 00:43:00.711378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-18 00:43:00.711382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-18 00:43:00.711386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-18 00:43:00.711389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-18 00:43:00.711393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-18 00:43:00.711397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-18 00:43:00.711401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-18 00:43:00.711405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-18 00:43:00.711423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-18 00:43:00.711427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-18 00:43:00.711432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-18 00:43:00.711435 | orchestrator |
2026-04-18 00:43:00.711439 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711443 | orchestrator | Saturday 18 April 2026 00:42:54 +0000 (0:00:00.305) 0:00:00.967 ********
2026-04-18 00:43:00.711447 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711451 | orchestrator |
2026-04-18 00:43:00.711455 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711459 | orchestrator | Saturday 18 April 2026 00:42:55 +0000 (0:00:00.359) 0:00:01.327 ********
2026-04-18 00:43:00.711463 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711467 | orchestrator |
2026-04-18 00:43:00.711470 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711474 | orchestrator | Saturday 18 April 2026 00:42:55 +0000 (0:00:00.177) 0:00:01.505 ********
2026-04-18 00:43:00.711481 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711485 | orchestrator |
2026-04-18 00:43:00.711489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711493 | orchestrator | Saturday 18 April 2026 00:42:55 +0000 (0:00:00.180) 0:00:01.686 ********
2026-04-18 00:43:00.711497 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711501 | orchestrator |
2026-04-18 00:43:00.711505 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711509 | orchestrator | Saturday 18 April 2026 00:42:55 +0000 (0:00:00.180) 0:00:01.866 ********
2026-04-18 00:43:00.711513 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711517 | orchestrator |
2026-04-18 00:43:00.711521 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711524 | orchestrator | Saturday 18 April 2026 00:42:55 +0000 (0:00:00.177) 0:00:02.043 ********
2026-04-18 00:43:00.711528 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711532 | orchestrator |
2026-04-18 00:43:00.711536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711540 | orchestrator | Saturday 18 April 2026 00:42:56 +0000 (0:00:00.179) 0:00:02.222 ********
2026-04-18 00:43:00.711544 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711548 | orchestrator |
2026-04-18 00:43:00.711552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711555 | orchestrator | Saturday 18 April 2026 00:42:56 +0000 (0:00:00.174) 0:00:02.397 ********
2026-04-18 00:43:00.711559 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711563 | orchestrator |
2026-04-18 00:43:00.711567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711574 | orchestrator | Saturday 18 April 2026 00:42:56 +0000 (0:00:00.184) 0:00:02.582 ********
2026-04-18 00:43:00.711578 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80)
2026-04-18 00:43:00.711583 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80)
2026-04-18 00:43:00.711587 | orchestrator |
2026-04-18 00:43:00.711591 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711604 | orchestrator | Saturday 18 April 2026 00:42:56 +0000 (0:00:00.406) 0:00:02.988 ********
2026-04-18 00:43:00.711608 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447)
2026-04-18 00:43:00.711612 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447)
2026-04-18 00:43:00.711616 | orchestrator |
2026-04-18 00:43:00.711620 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711624 | orchestrator | Saturday 18 April 2026 00:42:57 +0000 (0:00:00.407) 0:00:03.395 ********
2026-04-18 00:43:00.711632 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b)
2026-04-18 00:43:00.711636 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b)
2026-04-18 00:43:00.711640 | orchestrator |
2026-04-18 00:43:00.711644 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711648 | orchestrator | Saturday 18 April 2026 00:42:57 +0000 (0:00:00.497) 0:00:03.893 ********
2026-04-18 00:43:00.711651 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e)
2026-04-18 00:43:00.711655 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e)
2026-04-18 00:43:00.711659 | orchestrator |
2026-04-18 00:43:00.711663 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:43:00.711667 | orchestrator | Saturday 18 April 2026 00:42:58 +0000 (0:00:00.544) 0:00:04.438 ********
2026-04-18 00:43:00.711671 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-18 00:43:00.711675 | orchestrator |
2026-04-18 00:43:00.711679 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:00.711682 | orchestrator | Saturday 18 April 2026 00:42:59 +0000 (0:00:00.689) 0:00:05.127 ********
2026-04-18 00:43:00.711686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-18 00:43:00.711690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-18 00:43:00.711694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-18 00:43:00.711698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-18 00:43:00.711702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-18 00:43:00.711706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-18 00:43:00.711710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-18 00:43:00.711713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-18 00:43:00.711717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-18 00:43:00.711721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-18 00:43:00.711725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-18 00:43:00.711729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-18 00:43:00.711733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-18 00:43:00.711737 | orchestrator |
2026-04-18 00:43:00.711740 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:00.711744 | orchestrator | Saturday 18 April 2026 00:42:59 +0000 (0:00:00.367) 0:00:05.494 ********
2026-04-18 00:43:00.711748 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711752 | orchestrator |
2026-04-18 00:43:00.711756 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:00.711760 | orchestrator | Saturday 18 April 2026 00:42:59 +0000 (0:00:00.198) 0:00:05.693 ********
2026-04-18 00:43:00.711764 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711767 | orchestrator |
2026-04-18 00:43:00.711771 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:00.711775 | orchestrator | Saturday 18 April 2026 00:42:59 +0000 (0:00:00.214) 0:00:05.908 ********
2026-04-18 00:43:00.711779 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711784 | orchestrator |
2026-04-18 00:43:00.711788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:00.711796 | orchestrator | Saturday 18 April 2026 00:43:00 +0000 (0:00:00.196) 0:00:06.104 ********
2026-04-18 00:43:00.711801 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711806 | orchestrator |
2026-04-18 00:43:00.711810 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:00.711815 | orchestrator | Saturday 18 April 2026 00:43:00 +0000 (0:00:00.171) 0:00:06.276 ********
2026-04-18 00:43:00.711820 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711824 | orchestrator |
2026-04-18 00:43:00.711850 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:00.711856 | orchestrator | Saturday 18 April 2026 00:43:00 +0000 (0:00:00.163) 0:00:06.439 ********
2026-04-18 00:43:00.711862 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711868 | orchestrator |
2026-04-18 00:43:00.711877 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:00.711883 | orchestrator | Saturday 18 April 2026 00:43:00 +0000 (0:00:00.190) 0:00:06.630 ********
2026-04-18 00:43:00.711889 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:00.711897 | orchestrator |
2026-04-18 00:43:00.711907 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:07.164815 | orchestrator | Saturday 18 April 2026 00:43:00 +0000 (0:00:00.163) 0:00:06.794 ********
2026-04-18 00:43:07.164996 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:43:07.165026 | orchestrator |
2026-04-18 00:43:07.165046 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:43:07.165067 | orchestrator | Saturday 18 April 2026 00:43:00 +0000 (0:00:00.156) 0:00:06.950 ********
2026-04-18 00:43:07.165086 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-18 00:43:07.165106 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-18
00:43:07.165125 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-18 00:43:07.165143 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-18 00:43:07.165162 | orchestrator | 2026-04-18 00:43:07.165181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:07.165201 | orchestrator | Saturday 18 April 2026 00:43:01 +0000 (0:00:00.765) 0:00:07.716 ******** 2026-04-18 00:43:07.165220 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.165240 | orchestrator | 2026-04-18 00:43:07.165258 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:07.165277 | orchestrator | Saturday 18 April 2026 00:43:01 +0000 (0:00:00.154) 0:00:07.870 ******** 2026-04-18 00:43:07.165296 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.165314 | orchestrator | 2026-04-18 00:43:07.165333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:07.165353 | orchestrator | Saturday 18 April 2026 00:43:01 +0000 (0:00:00.168) 0:00:08.038 ******** 2026-04-18 00:43:07.165372 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.165391 | orchestrator | 2026-04-18 00:43:07.165409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:07.165428 | orchestrator | Saturday 18 April 2026 00:43:02 +0000 (0:00:00.164) 0:00:08.203 ******** 2026-04-18 00:43:07.165447 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.165465 | orchestrator | 2026-04-18 00:43:07.165484 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-18 00:43:07.165504 | orchestrator | Saturday 18 April 2026 00:43:02 +0000 (0:00:00.169) 0:00:08.372 ******** 2026-04-18 00:43:07.165522 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-18 00:43:07.165540 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-18 00:43:07.165557 | orchestrator | 2026-04-18 00:43:07.165576 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-18 00:43:07.165594 | orchestrator | Saturday 18 April 2026 00:43:02 +0000 (0:00:00.148) 0:00:08.521 ******** 2026-04-18 00:43:07.165610 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.165657 | orchestrator | 2026-04-18 00:43:07.165677 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-18 00:43:07.165694 | orchestrator | Saturday 18 April 2026 00:43:02 +0000 (0:00:00.109) 0:00:08.630 ******** 2026-04-18 00:43:07.165712 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.165729 | orchestrator | 2026-04-18 00:43:07.165746 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-18 00:43:07.165764 | orchestrator | Saturday 18 April 2026 00:43:02 +0000 (0:00:00.102) 0:00:08.733 ******** 2026-04-18 00:43:07.165781 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.165798 | orchestrator | 2026-04-18 00:43:07.165816 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-18 00:43:07.165901 | orchestrator | Saturday 18 April 2026 00:43:02 +0000 (0:00:00.104) 0:00:08.838 ******** 2026-04-18 00:43:07.165921 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:43:07.165937 | orchestrator | 2026-04-18 00:43:07.165954 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-18 00:43:07.165970 | orchestrator | Saturday 18 April 2026 00:43:02 +0000 (0:00:00.110) 0:00:08.948 ******** 2026-04-18 00:43:07.165987 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93b19634-3a0b-57aa-985a-342cbb17f88c'}}) 2026-04-18 00:43:07.166005 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97728c5d-edf3-594c-abdf-329078c85e67'}}) 2026-04-18 00:43:07.166105 | orchestrator | 2026-04-18 00:43:07.166121 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-18 00:43:07.166136 | orchestrator | Saturday 18 April 2026 00:43:02 +0000 (0:00:00.134) 0:00:09.083 ******** 2026-04-18 00:43:07.166151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93b19634-3a0b-57aa-985a-342cbb17f88c'}})  2026-04-18 00:43:07.166180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97728c5d-edf3-594c-abdf-329078c85e67'}})  2026-04-18 00:43:07.166194 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.166208 | orchestrator | 2026-04-18 00:43:07.166223 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-18 00:43:07.166238 | orchestrator | Saturday 18 April 2026 00:43:03 +0000 (0:00:00.139) 0:00:09.222 ******** 2026-04-18 00:43:07.166252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93b19634-3a0b-57aa-985a-342cbb17f88c'}})  2026-04-18 00:43:07.166265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97728c5d-edf3-594c-abdf-329078c85e67'}})  2026-04-18 00:43:07.166273 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.166285 | orchestrator | 2026-04-18 00:43:07.166298 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-18 00:43:07.166311 | orchestrator | Saturday 18 April 2026 00:43:03 +0000 (0:00:00.131) 0:00:09.354 ******** 2026-04-18 00:43:07.166324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93b19634-3a0b-57aa-985a-342cbb17f88c'}})  2026-04-18 00:43:07.166359 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97728c5d-edf3-594c-abdf-329078c85e67'}})  2026-04-18 00:43:07.166375 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.166389 | orchestrator | 2026-04-18 00:43:07.166403 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-18 00:43:07.166416 | orchestrator | Saturday 18 April 2026 00:43:03 +0000 (0:00:00.263) 0:00:09.617 ******** 2026-04-18 00:43:07.166429 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:43:07.166442 | orchestrator | 2026-04-18 00:43:07.166456 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-18 00:43:07.166470 | orchestrator | Saturday 18 April 2026 00:43:03 +0000 (0:00:00.129) 0:00:09.746 ******** 2026-04-18 00:43:07.166484 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:43:07.166498 | orchestrator | 2026-04-18 00:43:07.166529 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-18 00:43:07.166543 | orchestrator | Saturday 18 April 2026 00:43:03 +0000 (0:00:00.122) 0:00:09.869 ******** 2026-04-18 00:43:07.166557 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.166570 | orchestrator | 2026-04-18 00:43:07.166585 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-18 00:43:07.166599 | orchestrator | Saturday 18 April 2026 00:43:03 +0000 (0:00:00.121) 0:00:09.991 ******** 2026-04-18 00:43:07.166613 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.166626 | orchestrator | 2026-04-18 00:43:07.166640 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-18 00:43:07.166652 | orchestrator | Saturday 18 April 2026 00:43:04 +0000 (0:00:00.124) 0:00:10.115 ******** 2026-04-18 00:43:07.166677 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.166690 | orchestrator | 2026-04-18 
00:43:07.166704 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-18 00:43:07.166717 | orchestrator | Saturday 18 April 2026 00:43:04 +0000 (0:00:00.118) 0:00:10.234 ******** 2026-04-18 00:43:07.166730 | orchestrator | ok: [testbed-node-3] => { 2026-04-18 00:43:07.166744 | orchestrator |  "ceph_osd_devices": { 2026-04-18 00:43:07.166758 | orchestrator |  "sdb": { 2026-04-18 00:43:07.166774 | orchestrator |  "osd_lvm_uuid": "93b19634-3a0b-57aa-985a-342cbb17f88c" 2026-04-18 00:43:07.166787 | orchestrator |  }, 2026-04-18 00:43:07.166800 | orchestrator |  "sdc": { 2026-04-18 00:43:07.166813 | orchestrator |  "osd_lvm_uuid": "97728c5d-edf3-594c-abdf-329078c85e67" 2026-04-18 00:43:07.166828 | orchestrator |  } 2026-04-18 00:43:07.166863 | orchestrator |  } 2026-04-18 00:43:07.166878 | orchestrator | } 2026-04-18 00:43:07.166891 | orchestrator | 2026-04-18 00:43:07.166905 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-18 00:43:07.166918 | orchestrator | Saturday 18 April 2026 00:43:04 +0000 (0:00:00.120) 0:00:10.354 ******** 2026-04-18 00:43:07.166932 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.166945 | orchestrator | 2026-04-18 00:43:07.166958 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-18 00:43:07.166971 | orchestrator | Saturday 18 April 2026 00:43:04 +0000 (0:00:00.118) 0:00:10.473 ******** 2026-04-18 00:43:07.166985 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.166998 | orchestrator | 2026-04-18 00:43:07.167013 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-18 00:43:07.167028 | orchestrator | Saturday 18 April 2026 00:43:04 +0000 (0:00:00.113) 0:00:10.587 ******** 2026-04-18 00:43:07.167042 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:43:07.167056 | orchestrator | 2026-04-18 
00:43:07.167071 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-18 00:43:07.167085 | orchestrator | Saturday 18 April 2026 00:43:04 +0000 (0:00:00.119) 0:00:10.706 ******** 2026-04-18 00:43:07.167100 | orchestrator | changed: [testbed-node-3] => { 2026-04-18 00:43:07.167115 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-18 00:43:07.167130 | orchestrator |  "ceph_osd_devices": { 2026-04-18 00:43:07.167144 | orchestrator |  "sdb": { 2026-04-18 00:43:07.167158 | orchestrator |  "osd_lvm_uuid": "93b19634-3a0b-57aa-985a-342cbb17f88c" 2026-04-18 00:43:07.167173 | orchestrator |  }, 2026-04-18 00:43:07.167188 | orchestrator |  "sdc": { 2026-04-18 00:43:07.167202 | orchestrator |  "osd_lvm_uuid": "97728c5d-edf3-594c-abdf-329078c85e67" 2026-04-18 00:43:07.167215 | orchestrator |  } 2026-04-18 00:43:07.167229 | orchestrator |  }, 2026-04-18 00:43:07.167242 | orchestrator |  "lvm_volumes": [ 2026-04-18 00:43:07.167256 | orchestrator |  { 2026-04-18 00:43:07.167271 | orchestrator |  "data": "osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c", 2026-04-18 00:43:07.167286 | orchestrator |  "data_vg": "ceph-93b19634-3a0b-57aa-985a-342cbb17f88c" 2026-04-18 00:43:07.167313 | orchestrator |  }, 2026-04-18 00:43:07.167327 | orchestrator |  { 2026-04-18 00:43:07.167340 | orchestrator |  "data": "osd-block-97728c5d-edf3-594c-abdf-329078c85e67", 2026-04-18 00:43:07.167354 | orchestrator |  "data_vg": "ceph-97728c5d-edf3-594c-abdf-329078c85e67" 2026-04-18 00:43:07.167369 | orchestrator |  } 2026-04-18 00:43:07.167384 | orchestrator |  ] 2026-04-18 00:43:07.167399 | orchestrator |  } 2026-04-18 00:43:07.167413 | orchestrator | } 2026-04-18 00:43:07.167428 | orchestrator | 2026-04-18 00:43:07.167442 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-18 00:43:07.167457 | orchestrator | Saturday 18 April 2026 00:43:04 +0000 (0:00:00.181) 0:00:10.888 ******** 2026-04-18 
00:43:07.167471 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-18 00:43:07.167486 | orchestrator | 2026-04-18 00:43:07.167500 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-18 00:43:07.167513 | orchestrator | 2026-04-18 00:43:07.167527 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-18 00:43:07.167540 | orchestrator | Saturday 18 April 2026 00:43:06 +0000 (0:00:01.939) 0:00:12.827 ******** 2026-04-18 00:43:07.167553 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-18 00:43:07.167567 | orchestrator | 2026-04-18 00:43:07.167579 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-18 00:43:07.167592 | orchestrator | Saturday 18 April 2026 00:43:06 +0000 (0:00:00.219) 0:00:13.046 ******** 2026-04-18 00:43:07.167606 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:43:07.167620 | orchestrator | 2026-04-18 00:43:07.167645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968058 | orchestrator | Saturday 18 April 2026 00:43:07 +0000 (0:00:00.202) 0:00:13.249 ******** 2026-04-18 00:43:13.968168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-18 00:43:13.968184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-18 00:43:13.968193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-18 00:43:13.968202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-18 00:43:13.968210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-18 00:43:13.968219 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-18 00:43:13.968227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-18 00:43:13.968235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-18 00:43:13.968248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-18 00:43:13.968257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-18 00:43:13.968266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-18 00:43:13.968274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-18 00:43:13.968282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-18 00:43:13.968290 | orchestrator | 2026-04-18 00:43:13.968299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968306 | orchestrator | Saturday 18 April 2026 00:43:07 +0000 (0:00:00.334) 0:00:13.583 ******** 2026-04-18 00:43:13.968315 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.968325 | orchestrator | 2026-04-18 00:43:13.968333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968341 | orchestrator | Saturday 18 April 2026 00:43:07 +0000 (0:00:00.176) 0:00:13.760 ******** 2026-04-18 00:43:13.968349 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.968378 | orchestrator | 2026-04-18 00:43:13.968387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968396 | orchestrator | Saturday 18 April 2026 00:43:07 +0000 (0:00:00.170) 0:00:13.930 ******** 2026-04-18 00:43:13.968404 | orchestrator | skipping: 
[testbed-node-4] 2026-04-18 00:43:13.968412 | orchestrator | 2026-04-18 00:43:13.968420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968428 | orchestrator | Saturday 18 April 2026 00:43:08 +0000 (0:00:00.176) 0:00:14.106 ******** 2026-04-18 00:43:13.968436 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.968444 | orchestrator | 2026-04-18 00:43:13.968452 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968459 | orchestrator | Saturday 18 April 2026 00:43:08 +0000 (0:00:00.172) 0:00:14.279 ******** 2026-04-18 00:43:13.968467 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.968475 | orchestrator | 2026-04-18 00:43:13.968483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968491 | orchestrator | Saturday 18 April 2026 00:43:08 +0000 (0:00:00.176) 0:00:14.455 ******** 2026-04-18 00:43:13.968499 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.968507 | orchestrator | 2026-04-18 00:43:13.968515 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968523 | orchestrator | Saturday 18 April 2026 00:43:08 +0000 (0:00:00.421) 0:00:14.877 ******** 2026-04-18 00:43:13.968531 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.968539 | orchestrator | 2026-04-18 00:43:13.968547 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968555 | orchestrator | Saturday 18 April 2026 00:43:08 +0000 (0:00:00.194) 0:00:15.071 ******** 2026-04-18 00:43:13.968562 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.968570 | orchestrator | 2026-04-18 00:43:13.968578 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968586 | 
orchestrator | Saturday 18 April 2026 00:43:09 +0000 (0:00:00.177) 0:00:15.249 ******** 2026-04-18 00:43:13.968594 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee) 2026-04-18 00:43:13.968603 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee) 2026-04-18 00:43:13.968612 | orchestrator | 2026-04-18 00:43:13.968621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968628 | orchestrator | Saturday 18 April 2026 00:43:09 +0000 (0:00:00.352) 0:00:15.601 ******** 2026-04-18 00:43:13.968637 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0) 2026-04-18 00:43:13.968645 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0) 2026-04-18 00:43:13.968654 | orchestrator | 2026-04-18 00:43:13.968664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968680 | orchestrator | Saturday 18 April 2026 00:43:09 +0000 (0:00:00.395) 0:00:15.997 ******** 2026-04-18 00:43:13.968689 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527) 2026-04-18 00:43:13.968699 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527) 2026-04-18 00:43:13.968708 | orchestrator | 2026-04-18 00:43:13.968717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968744 | orchestrator | Saturday 18 April 2026 00:43:10 +0000 (0:00:00.395) 0:00:16.393 ******** 2026-04-18 00:43:13.968753 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d) 2026-04-18 00:43:13.968761 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d) 2026-04-18 00:43:13.968770 | orchestrator | 2026-04-18 00:43:13.968777 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:13.968794 | orchestrator | Saturday 18 April 2026 00:43:10 +0000 (0:00:00.385) 0:00:16.778 ******** 2026-04-18 00:43:13.968803 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-18 00:43:13.968811 | orchestrator | 2026-04-18 00:43:13.968819 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.968827 | orchestrator | Saturday 18 April 2026 00:43:11 +0000 (0:00:00.325) 0:00:17.104 ******** 2026-04-18 00:43:13.968891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-18 00:43:13.968902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-18 00:43:13.968910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-18 00:43:13.968918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-18 00:43:13.968926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-18 00:43:13.968933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-18 00:43:13.968941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-18 00:43:13.968948 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-18 00:43:13.968955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-18 00:43:13.968963 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-18 00:43:13.968970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-18 00:43:13.968978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-18 00:43:13.968986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-18 00:43:13.968993 | orchestrator | 2026-04-18 00:43:13.969001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969010 | orchestrator | Saturday 18 April 2026 00:43:11 +0000 (0:00:00.375) 0:00:17.480 ******** 2026-04-18 00:43:13.969017 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.969025 | orchestrator | 2026-04-18 00:43:13.969034 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969042 | orchestrator | Saturday 18 April 2026 00:43:11 +0000 (0:00:00.174) 0:00:17.655 ******** 2026-04-18 00:43:13.969051 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.969059 | orchestrator | 2026-04-18 00:43:13.969068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969076 | orchestrator | Saturday 18 April 2026 00:43:12 +0000 (0:00:00.545) 0:00:18.201 ******** 2026-04-18 00:43:13.969084 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.969093 | orchestrator | 2026-04-18 00:43:13.969101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969109 | orchestrator | Saturday 18 April 2026 00:43:12 +0000 (0:00:00.196) 0:00:18.397 ******** 2026-04-18 00:43:13.969117 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.969126 | orchestrator | 2026-04-18 00:43:13.969134 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969142 | orchestrator | Saturday 18 April 2026 00:43:12 +0000 (0:00:00.189) 0:00:18.587 ******** 2026-04-18 00:43:13.969149 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.969158 | orchestrator | 2026-04-18 00:43:13.969166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969175 | orchestrator | Saturday 18 April 2026 00:43:12 +0000 (0:00:00.179) 0:00:18.767 ******** 2026-04-18 00:43:13.969183 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.969191 | orchestrator | 2026-04-18 00:43:13.969200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969219 | orchestrator | Saturday 18 April 2026 00:43:12 +0000 (0:00:00.187) 0:00:18.955 ******** 2026-04-18 00:43:13.969227 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.969235 | orchestrator | 2026-04-18 00:43:13.969243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969250 | orchestrator | Saturday 18 April 2026 00:43:13 +0000 (0:00:00.190) 0:00:19.146 ******** 2026-04-18 00:43:13.969258 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:13.969266 | orchestrator | 2026-04-18 00:43:13.969274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969282 | orchestrator | Saturday 18 April 2026 00:43:13 +0000 (0:00:00.193) 0:00:19.339 ******** 2026-04-18 00:43:13.969297 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-18 00:43:13.969305 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-18 00:43:13.969313 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-18 00:43:13.969320 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-18 00:43:13.969327 | orchestrator | 2026-04-18 
00:43:13.969334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:13.969341 | orchestrator | Saturday 18 April 2026 00:43:13 +0000 (0:00:00.599) 0:00:19.939 ******** 2026-04-18 00:43:13.969348 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940164 | orchestrator | 2026-04-18 00:43:19.940268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:19.940285 | orchestrator | Saturday 18 April 2026 00:43:14 +0000 (0:00:00.190) 0:00:20.129 ******** 2026-04-18 00:43:19.940298 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940310 | orchestrator | 2026-04-18 00:43:19.940322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:19.940333 | orchestrator | Saturday 18 April 2026 00:43:14 +0000 (0:00:00.174) 0:00:20.304 ******** 2026-04-18 00:43:19.940343 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940354 | orchestrator | 2026-04-18 00:43:19.940365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:19.940375 | orchestrator | Saturday 18 April 2026 00:43:14 +0000 (0:00:00.183) 0:00:20.488 ******** 2026-04-18 00:43:19.940386 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940396 | orchestrator | 2026-04-18 00:43:19.940407 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-18 00:43:19.940418 | orchestrator | Saturday 18 April 2026 00:43:14 +0000 (0:00:00.199) 0:00:20.687 ******** 2026-04-18 00:43:19.940429 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-18 00:43:19.940439 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-18 00:43:19.940450 | orchestrator | 2026-04-18 00:43:19.940461 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-18 00:43:19.940472 | orchestrator | Saturday 18 April 2026 00:43:14 +0000 (0:00:00.305) 0:00:20.993 ******** 2026-04-18 00:43:19.940482 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940493 | orchestrator | 2026-04-18 00:43:19.940504 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-18 00:43:19.940514 | orchestrator | Saturday 18 April 2026 00:43:15 +0000 (0:00:00.136) 0:00:21.129 ******** 2026-04-18 00:43:19.940525 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940536 | orchestrator | 2026-04-18 00:43:19.940546 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-18 00:43:19.940557 | orchestrator | Saturday 18 April 2026 00:43:15 +0000 (0:00:00.134) 0:00:21.264 ******** 2026-04-18 00:43:19.940567 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940578 | orchestrator | 2026-04-18 00:43:19.940589 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-18 00:43:19.940600 | orchestrator | Saturday 18 April 2026 00:43:15 +0000 (0:00:00.127) 0:00:21.392 ******** 2026-04-18 00:43:19.940610 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:43:19.940647 | orchestrator | 2026-04-18 00:43:19.940659 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-18 00:43:19.940670 | orchestrator | Saturday 18 April 2026 00:43:15 +0000 (0:00:00.136) 0:00:21.528 ******** 2026-04-18 00:43:19.940681 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd71a58-43ec-5e10-bd02-c7d805355b61'}}) 2026-04-18 00:43:19.940692 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'}}) 2026-04-18 00:43:19.940705 | orchestrator | 2026-04-18 00:43:19.940717 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-18 00:43:19.940729 | orchestrator | Saturday 18 April 2026 00:43:15 +0000 (0:00:00.162) 0:00:21.691 ******** 2026-04-18 00:43:19.940743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd71a58-43ec-5e10-bd02-c7d805355b61'}})  2026-04-18 00:43:19.940758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'}})  2026-04-18 00:43:19.940770 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940782 | orchestrator | 2026-04-18 00:43:19.940795 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-18 00:43:19.940809 | orchestrator | Saturday 18 April 2026 00:43:15 +0000 (0:00:00.149) 0:00:21.840 ******** 2026-04-18 00:43:19.940822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd71a58-43ec-5e10-bd02-c7d805355b61'}})  2026-04-18 00:43:19.940834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'}})  2026-04-18 00:43:19.940906 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940920 | orchestrator | 2026-04-18 00:43:19.940933 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-18 00:43:19.940945 | orchestrator | Saturday 18 April 2026 00:43:15 +0000 (0:00:00.145) 0:00:21.985 ******** 2026-04-18 00:43:19.940957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd71a58-43ec-5e10-bd02-c7d805355b61'}})  2026-04-18 00:43:19.940969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'}})  2026-04-18 00:43:19.940982 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.940994 | 
orchestrator | 2026-04-18 00:43:19.941005 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-18 00:43:19.941017 | orchestrator | Saturday 18 April 2026 00:43:16 +0000 (0:00:00.153) 0:00:22.139 ******** 2026-04-18 00:43:19.941030 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:43:19.941042 | orchestrator | 2026-04-18 00:43:19.941053 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-18 00:43:19.941065 | orchestrator | Saturday 18 April 2026 00:43:16 +0000 (0:00:00.133) 0:00:22.272 ******** 2026-04-18 00:43:19.941076 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:43:19.941087 | orchestrator | 2026-04-18 00:43:19.941097 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-18 00:43:19.941108 | orchestrator | Saturday 18 April 2026 00:43:16 +0000 (0:00:00.122) 0:00:22.395 ******** 2026-04-18 00:43:19.941135 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.941147 | orchestrator | 2026-04-18 00:43:19.941176 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-18 00:43:19.941187 | orchestrator | Saturday 18 April 2026 00:43:16 +0000 (0:00:00.125) 0:00:22.520 ******** 2026-04-18 00:43:19.941197 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.941208 | orchestrator | 2026-04-18 00:43:19.941219 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-18 00:43:19.941229 | orchestrator | Saturday 18 April 2026 00:43:16 +0000 (0:00:00.290) 0:00:22.811 ******** 2026-04-18 00:43:19.941240 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.941250 | orchestrator | 2026-04-18 00:43:19.941270 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-18 00:43:19.941281 | orchestrator | Saturday 18 April 2026 00:43:16 +0000 
(0:00:00.129) 0:00:22.940 ******** 2026-04-18 00:43:19.941291 | orchestrator | ok: [testbed-node-4] => { 2026-04-18 00:43:19.941302 | orchestrator |  "ceph_osd_devices": { 2026-04-18 00:43:19.941313 | orchestrator |  "sdb": { 2026-04-18 00:43:19.941325 | orchestrator |  "osd_lvm_uuid": "9fd71a58-43ec-5e10-bd02-c7d805355b61" 2026-04-18 00:43:19.941336 | orchestrator |  }, 2026-04-18 00:43:19.941347 | orchestrator |  "sdc": { 2026-04-18 00:43:19.941358 | orchestrator |  "osd_lvm_uuid": "0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a" 2026-04-18 00:43:19.941374 | orchestrator |  } 2026-04-18 00:43:19.941393 | orchestrator |  } 2026-04-18 00:43:19.941414 | orchestrator | } 2026-04-18 00:43:19.941440 | orchestrator | 2026-04-18 00:43:19.941466 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-18 00:43:19.941486 | orchestrator | Saturday 18 April 2026 00:43:16 +0000 (0:00:00.121) 0:00:23.062 ******** 2026-04-18 00:43:19.941504 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.941522 | orchestrator | 2026-04-18 00:43:19.941542 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-18 00:43:19.941562 | orchestrator | Saturday 18 April 2026 00:43:17 +0000 (0:00:00.115) 0:00:23.177 ******** 2026-04-18 00:43:19.941581 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.941601 | orchestrator | 2026-04-18 00:43:19.941616 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-18 00:43:19.941626 | orchestrator | Saturday 18 April 2026 00:43:17 +0000 (0:00:00.133) 0:00:23.311 ******** 2026-04-18 00:43:19.941637 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:43:19.941648 | orchestrator | 2026-04-18 00:43:19.941659 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-18 00:43:19.941669 | orchestrator | Saturday 18 April 2026 00:43:17 +0000 
(0:00:00.129) 0:00:23.441 ******** 2026-04-18 00:43:19.941680 | orchestrator | changed: [testbed-node-4] => { 2026-04-18 00:43:19.941691 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-18 00:43:19.941702 | orchestrator |  "ceph_osd_devices": { 2026-04-18 00:43:19.941712 | orchestrator |  "sdb": { 2026-04-18 00:43:19.941723 | orchestrator |  "osd_lvm_uuid": "9fd71a58-43ec-5e10-bd02-c7d805355b61" 2026-04-18 00:43:19.941734 | orchestrator |  }, 2026-04-18 00:43:19.941744 | orchestrator |  "sdc": { 2026-04-18 00:43:19.941755 | orchestrator |  "osd_lvm_uuid": "0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a" 2026-04-18 00:43:19.941766 | orchestrator |  } 2026-04-18 00:43:19.941777 | orchestrator |  }, 2026-04-18 00:43:19.941787 | orchestrator |  "lvm_volumes": [ 2026-04-18 00:43:19.941798 | orchestrator |  { 2026-04-18 00:43:19.941809 | orchestrator |  "data": "osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61", 2026-04-18 00:43:19.941820 | orchestrator |  "data_vg": "ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61" 2026-04-18 00:43:19.941831 | orchestrator |  }, 2026-04-18 00:43:19.941867 | orchestrator |  { 2026-04-18 00:43:19.941878 | orchestrator |  "data": "osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a", 2026-04-18 00:43:19.941889 | orchestrator |  "data_vg": "ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a" 2026-04-18 00:43:19.941899 | orchestrator |  } 2026-04-18 00:43:19.941910 | orchestrator |  ] 2026-04-18 00:43:19.941921 | orchestrator |  } 2026-04-18 00:43:19.941932 | orchestrator | } 2026-04-18 00:43:19.941942 | orchestrator | 2026-04-18 00:43:19.941953 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-18 00:43:19.941964 | orchestrator | Saturday 18 April 2026 00:43:17 +0000 (0:00:00.206) 0:00:23.647 ******** 2026-04-18 00:43:19.941975 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-18 00:43:19.941985 | orchestrator | 2026-04-18 00:43:19.941996 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-18 00:43:19.942086 | orchestrator | 2026-04-18 00:43:19.942101 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-18 00:43:19.942112 | orchestrator | Saturday 18 April 2026 00:43:18 +0000 (0:00:01.086) 0:00:24.734 ******** 2026-04-18 00:43:19.942123 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-18 00:43:19.942134 | orchestrator | 2026-04-18 00:43:19.942145 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-18 00:43:19.942156 | orchestrator | Saturday 18 April 2026 00:43:19 +0000 (0:00:00.436) 0:00:25.171 ******** 2026-04-18 00:43:19.942166 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:43:19.942177 | orchestrator | 2026-04-18 00:43:19.942188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:19.942198 | orchestrator | Saturday 18 April 2026 00:43:19 +0000 (0:00:00.575) 0:00:25.746 ******** 2026-04-18 00:43:19.942209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-18 00:43:19.942220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-18 00:43:19.942231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-18 00:43:19.942241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-18 00:43:19.942252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-18 00:43:19.942274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-18 00:43:27.744030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-18 00:43:27.744120 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-18 00:43:27.744134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-18 00:43:27.744144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-18 00:43:27.744154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-18 00:43:27.744163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-18 00:43:27.744173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-18 00:43:27.744183 | orchestrator | 2026-04-18 00:43:27.744192 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744198 | orchestrator | Saturday 18 April 2026 00:43:20 +0000 (0:00:00.358) 0:00:26.105 ******** 2026-04-18 00:43:27.744204 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.744214 | orchestrator | 2026-04-18 00:43:27.744223 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744232 | orchestrator | Saturday 18 April 2026 00:43:20 +0000 (0:00:00.213) 0:00:26.318 ******** 2026-04-18 00:43:27.744241 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.744250 | orchestrator | 2026-04-18 00:43:27.744277 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744287 | orchestrator | Saturday 18 April 2026 00:43:20 +0000 (0:00:00.190) 0:00:26.509 ******** 2026-04-18 00:43:27.744295 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.744305 | orchestrator | 2026-04-18 00:43:27.744315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744324 | 
orchestrator | Saturday 18 April 2026 00:43:20 +0000 (0:00:00.191) 0:00:26.701 ******** 2026-04-18 00:43:27.744333 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.744342 | orchestrator | 2026-04-18 00:43:27.744355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744364 | orchestrator | Saturday 18 April 2026 00:43:20 +0000 (0:00:00.186) 0:00:26.888 ******** 2026-04-18 00:43:27.744373 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.744404 | orchestrator | 2026-04-18 00:43:27.744414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744423 | orchestrator | Saturday 18 April 2026 00:43:20 +0000 (0:00:00.179) 0:00:27.067 ******** 2026-04-18 00:43:27.744432 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.744441 | orchestrator | 2026-04-18 00:43:27.744451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744460 | orchestrator | Saturday 18 April 2026 00:43:21 +0000 (0:00:00.190) 0:00:27.257 ******** 2026-04-18 00:43:27.744469 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.744478 | orchestrator | 2026-04-18 00:43:27.744487 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744497 | orchestrator | Saturday 18 April 2026 00:43:21 +0000 (0:00:00.198) 0:00:27.456 ******** 2026-04-18 00:43:27.744506 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.744515 | orchestrator | 2026-04-18 00:43:27.744525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744534 | orchestrator | Saturday 18 April 2026 00:43:21 +0000 (0:00:00.190) 0:00:27.646 ******** 2026-04-18 00:43:27.744543 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5) 2026-04-18 00:43:27.744554 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5) 2026-04-18 00:43:27.744563 | orchestrator | 2026-04-18 00:43:27.744573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744582 | orchestrator | Saturday 18 April 2026 00:43:22 +0000 (0:00:00.612) 0:00:28.259 ******** 2026-04-18 00:43:27.744591 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a) 2026-04-18 00:43:27.744601 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a) 2026-04-18 00:43:27.744610 | orchestrator | 2026-04-18 00:43:27.744619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744629 | orchestrator | Saturday 18 April 2026 00:43:22 +0000 (0:00:00.748) 0:00:29.008 ******** 2026-04-18 00:43:27.744637 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389) 2026-04-18 00:43:27.744647 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389) 2026-04-18 00:43:27.744656 | orchestrator | 2026-04-18 00:43:27.744665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-18 00:43:27.744675 | orchestrator | Saturday 18 April 2026 00:43:23 +0000 (0:00:00.418) 0:00:29.427 ******** 2026-04-18 00:43:27.744685 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231) 2026-04-18 00:43:27.744695 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231) 2026-04-18 00:43:27.744704 | orchestrator | 2026-04-18 00:43:27.744714 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-18 00:43:27.744723 | orchestrator | Saturday 18 April 2026 00:43:23 +0000 (0:00:00.405) 0:00:29.833 ******** 2026-04-18 00:43:27.744733 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-18 00:43:27.744742 | orchestrator | 2026-04-18 00:43:27.744752 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.744776 | orchestrator | Saturday 18 April 2026 00:43:24 +0000 (0:00:00.323) 0:00:30.156 ******** 2026-04-18 00:43:27.744787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-18 00:43:27.744796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-18 00:43:27.744805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-18 00:43:27.744815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-18 00:43:27.744831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-18 00:43:27.744857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-18 00:43:27.744868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-18 00:43:27.744878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-18 00:43:27.744888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-18 00:43:27.744898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-18 00:43:27.744907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-04-18 00:43:27.744917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-18 00:43:27.744925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-18 00:43:27.744934 | orchestrator | 2026-04-18 00:43:27.744944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.744953 | orchestrator | Saturday 18 April 2026 00:43:24 +0000 (0:00:00.358) 0:00:30.515 ******** 2026-04-18 00:43:27.744962 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.744971 | orchestrator | 2026-04-18 00:43:27.744980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.744988 | orchestrator | Saturday 18 April 2026 00:43:24 +0000 (0:00:00.191) 0:00:30.706 ******** 2026-04-18 00:43:27.744997 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745006 | orchestrator | 2026-04-18 00:43:27.745016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745025 | orchestrator | Saturday 18 April 2026 00:43:24 +0000 (0:00:00.198) 0:00:30.905 ******** 2026-04-18 00:43:27.745034 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745044 | orchestrator | 2026-04-18 00:43:27.745053 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745061 | orchestrator | Saturday 18 April 2026 00:43:25 +0000 (0:00:00.185) 0:00:31.091 ******** 2026-04-18 00:43:27.745070 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745079 | orchestrator | 2026-04-18 00:43:27.745088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745097 | orchestrator | Saturday 18 April 2026 00:43:25 +0000 (0:00:00.187) 0:00:31.278 ******** 2026-04-18 00:43:27.745106 
| orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745115 | orchestrator | 2026-04-18 00:43:27.745124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745133 | orchestrator | Saturday 18 April 2026 00:43:25 +0000 (0:00:00.188) 0:00:31.467 ******** 2026-04-18 00:43:27.745142 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745151 | orchestrator | 2026-04-18 00:43:27.745160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745169 | orchestrator | Saturday 18 April 2026 00:43:26 +0000 (0:00:00.628) 0:00:32.096 ******** 2026-04-18 00:43:27.745177 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745187 | orchestrator | 2026-04-18 00:43:27.745200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745209 | orchestrator | Saturday 18 April 2026 00:43:26 +0000 (0:00:00.196) 0:00:32.292 ******** 2026-04-18 00:43:27.745218 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745227 | orchestrator | 2026-04-18 00:43:27.745236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745245 | orchestrator | Saturday 18 April 2026 00:43:26 +0000 (0:00:00.180) 0:00:32.472 ******** 2026-04-18 00:43:27.745254 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-18 00:43:27.745263 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-18 00:43:27.745289 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-18 00:43:27.745298 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-18 00:43:27.745307 | orchestrator | 2026-04-18 00:43:27.745316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745325 | orchestrator | Saturday 18 April 2026 00:43:26 +0000 (0:00:00.599) 
0:00:33.072 ******** 2026-04-18 00:43:27.745333 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745343 | orchestrator | 2026-04-18 00:43:27.745355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745364 | orchestrator | Saturday 18 April 2026 00:43:27 +0000 (0:00:00.194) 0:00:33.266 ******** 2026-04-18 00:43:27.745374 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745382 | orchestrator | 2026-04-18 00:43:27.745392 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745401 | orchestrator | Saturday 18 April 2026 00:43:27 +0000 (0:00:00.193) 0:00:33.459 ******** 2026-04-18 00:43:27.745410 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745419 | orchestrator | 2026-04-18 00:43:27.745428 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-18 00:43:27.745437 | orchestrator | Saturday 18 April 2026 00:43:27 +0000 (0:00:00.182) 0:00:33.641 ******** 2026-04-18 00:43:27.745445 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:27.745454 | orchestrator | 2026-04-18 00:43:27.745470 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-18 00:43:31.686194 | orchestrator | Saturday 18 April 2026 00:43:27 +0000 (0:00:00.186) 0:00:33.828 ******** 2026-04-18 00:43:31.686303 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-18 00:43:31.686323 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-18 00:43:31.686334 | orchestrator | 2026-04-18 00:43:31.686346 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-18 00:43:31.686358 | orchestrator | Saturday 18 April 2026 00:43:27 +0000 (0:00:00.155) 0:00:33.983 ******** 2026-04-18 00:43:31.686369 | orchestrator | skipping: 
[testbed-node-5] 2026-04-18 00:43:31.686381 | orchestrator | 2026-04-18 00:43:31.686392 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-18 00:43:31.686402 | orchestrator | Saturday 18 April 2026 00:43:28 +0000 (0:00:00.126) 0:00:34.110 ******** 2026-04-18 00:43:31.686414 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:31.686427 | orchestrator | 2026-04-18 00:43:31.686439 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-18 00:43:31.686450 | orchestrator | Saturday 18 April 2026 00:43:28 +0000 (0:00:00.120) 0:00:34.231 ******** 2026-04-18 00:43:31.686460 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:31.686470 | orchestrator | 2026-04-18 00:43:31.686481 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-18 00:43:31.686492 | orchestrator | Saturday 18 April 2026 00:43:28 +0000 (0:00:00.125) 0:00:34.357 ******** 2026-04-18 00:43:31.686503 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:43:31.686514 | orchestrator | 2026-04-18 00:43:31.686525 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-18 00:43:31.686536 | orchestrator | Saturday 18 April 2026 00:43:28 +0000 (0:00:00.300) 0:00:34.657 ******** 2026-04-18 00:43:31.686549 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fe91ca0a-93bc-5e10-8732-62b62acecb68'}}) 2026-04-18 00:43:31.686562 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a409408a-9332-5b4b-a953-28c1be45fb12'}}) 2026-04-18 00:43:31.686574 | orchestrator | 2026-04-18 00:43:31.686586 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-18 00:43:31.686596 | orchestrator | Saturday 18 April 2026 00:43:28 +0000 (0:00:00.167) 0:00:34.825 ******** 2026-04-18 00:43:31.686607 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fe91ca0a-93bc-5e10-8732-62b62acecb68'}})  2026-04-18 00:43:31.686674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a409408a-9332-5b4b-a953-28c1be45fb12'}})  2026-04-18 00:43:31.686699 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:31.686709 | orchestrator | 2026-04-18 00:43:31.686720 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-18 00:43:31.686732 | orchestrator | Saturday 18 April 2026 00:43:28 +0000 (0:00:00.135) 0:00:34.961 ******** 2026-04-18 00:43:31.686743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fe91ca0a-93bc-5e10-8732-62b62acecb68'}})  2026-04-18 00:43:31.686755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a409408a-9332-5b4b-a953-28c1be45fb12'}})  2026-04-18 00:43:31.686766 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:31.686776 | orchestrator | 2026-04-18 00:43:31.686787 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-18 00:43:31.686799 | orchestrator | Saturday 18 April 2026 00:43:29 +0000 (0:00:00.161) 0:00:35.122 ******** 2026-04-18 00:43:31.686811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fe91ca0a-93bc-5e10-8732-62b62acecb68'}})  2026-04-18 00:43:31.686822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a409408a-9332-5b4b-a953-28c1be45fb12'}})  2026-04-18 00:43:31.686833 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:31.686874 | orchestrator | 2026-04-18 00:43:31.686886 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-18 00:43:31.686898 | orchestrator | Saturday 18 April 2026 00:43:29 +0000 
(0:00:00.137) 0:00:35.260 ******** 2026-04-18 00:43:31.686910 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:43:31.686922 | orchestrator | 2026-04-18 00:43:31.686934 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-18 00:43:31.686946 | orchestrator | Saturday 18 April 2026 00:43:29 +0000 (0:00:00.114) 0:00:35.374 ******** 2026-04-18 00:43:31.686957 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:43:31.686968 | orchestrator | 2026-04-18 00:43:31.686980 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-18 00:43:31.686991 | orchestrator | Saturday 18 April 2026 00:43:29 +0000 (0:00:00.134) 0:00:35.509 ******** 2026-04-18 00:43:31.687002 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:31.687013 | orchestrator | 2026-04-18 00:43:31.687024 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-18 00:43:31.687037 | orchestrator | Saturday 18 April 2026 00:43:29 +0000 (0:00:00.115) 0:00:35.625 ******** 2026-04-18 00:43:31.687047 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:31.687059 | orchestrator | 2026-04-18 00:43:31.687070 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-18 00:43:31.687080 | orchestrator | Saturday 18 April 2026 00:43:29 +0000 (0:00:00.137) 0:00:35.762 ******** 2026-04-18 00:43:31.687091 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:43:31.687101 | orchestrator | 2026-04-18 00:43:31.687112 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-18 00:43:31.687124 | orchestrator | Saturday 18 April 2026 00:43:29 +0000 (0:00:00.116) 0:00:35.879 ******** 2026-04-18 00:43:31.687135 | orchestrator | ok: [testbed-node-5] => { 2026-04-18 00:43:31.687147 | orchestrator |  "ceph_osd_devices": { 2026-04-18 00:43:31.687158 | orchestrator |  "sdb": { 
2026-04-18 00:43:31.687193 | orchestrator |             "osd_lvm_uuid": "fe91ca0a-93bc-5e10-8732-62b62acecb68"
2026-04-18 00:43:31.687205 | orchestrator |         },
2026-04-18 00:43:31.687216 | orchestrator |         "sdc": {
2026-04-18 00:43:31.687227 | orchestrator |             "osd_lvm_uuid": "a409408a-9332-5b4b-a953-28c1be45fb12"
2026-04-18 00:43:31.687237 | orchestrator |         }
2026-04-18 00:43:31.687247 | orchestrator |     }
2026-04-18 00:43:31.687258 | orchestrator | }
2026-04-18 00:43:31.687268 | orchestrator |
2026-04-18 00:43:31.687279 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-18 00:43:31.687303 | orchestrator | Saturday 18 April 2026  00:43:29 +0000 (0:00:00.128)       0:00:36.007 ********
2026-04-18 00:43:31.687314 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:43:31.687325 | orchestrator |
2026-04-18 00:43:31.687336 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-18 00:43:31.687346 | orchestrator | Saturday 18 April 2026  00:43:30 +0000 (0:00:00.152)       0:00:36.160 ********
2026-04-18 00:43:31.687356 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:43:31.687367 | orchestrator |
2026-04-18 00:43:31.687377 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-18 00:43:31.687387 | orchestrator | Saturday 18 April 2026  00:43:30 +0000 (0:00:00.291)       0:00:36.451 ********
2026-04-18 00:43:31.687399 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:43:31.687409 | orchestrator |
2026-04-18 00:43:31.687420 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-18 00:43:31.687448 | orchestrator | Saturday 18 April 2026  00:43:30 +0000 (0:00:00.141)       0:00:36.593 ********
2026-04-18 00:43:31.687460 | orchestrator | changed: [testbed-node-5] => {
2026-04-18 00:43:31.687471 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-18 00:43:31.687481 | orchestrator |         "ceph_osd_devices": {
2026-04-18 00:43:31.687492 | orchestrator |             "sdb": {
2026-04-18 00:43:31.687503 | orchestrator |                 "osd_lvm_uuid": "fe91ca0a-93bc-5e10-8732-62b62acecb68"
2026-04-18 00:43:31.687513 | orchestrator |             },
2026-04-18 00:43:31.687525 | orchestrator |             "sdc": {
2026-04-18 00:43:31.687535 | orchestrator |                 "osd_lvm_uuid": "a409408a-9332-5b4b-a953-28c1be45fb12"
2026-04-18 00:43:31.687567 | orchestrator |             }
2026-04-18 00:43:31.687579 | orchestrator |         },
2026-04-18 00:43:31.687588 | orchestrator |         "lvm_volumes": [
2026-04-18 00:43:31.687600 | orchestrator |             {
2026-04-18 00:43:31.687610 | orchestrator |                 "data": "osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68",
2026-04-18 00:43:31.687622 | orchestrator |                 "data_vg": "ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68"
2026-04-18 00:43:31.687633 | orchestrator |             },
2026-04-18 00:43:31.687643 | orchestrator |             {
2026-04-18 00:43:31.687658 | orchestrator |                 "data": "osd-block-a409408a-9332-5b4b-a953-28c1be45fb12",
2026-04-18 00:43:31.687667 | orchestrator |                 "data_vg": "ceph-a409408a-9332-5b4b-a953-28c1be45fb12"
2026-04-18 00:43:31.687677 | orchestrator |             }
2026-04-18 00:43:31.687688 | orchestrator |         ]
2026-04-18 00:43:31.687697 | orchestrator |     }
2026-04-18 00:43:31.687706 | orchestrator | }
2026-04-18 00:43:31.687715 | orchestrator |
2026-04-18 00:43:31.687725 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-18 00:43:31.687734 | orchestrator | Saturday 18 April 2026  00:43:30 +0000 (0:00:00.215)       0:00:36.809 ********
2026-04-18 00:43:31.687744 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-18 00:43:31.687754 | orchestrator |
2026-04-18 00:43:31.687764 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:43:31.687775 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-18 00:43:31.687788 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-18 00:43:31.687799 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-18 00:43:31.687810 | orchestrator |
2026-04-18 00:43:31.687821 | orchestrator |
2026-04-18 00:43:31.687832 | orchestrator |
2026-04-18 00:43:31.687894 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:43:31.687908 | orchestrator | Saturday 18 April 2026  00:43:31 +0000 (0:00:00.948)       0:00:37.757 ********
2026-04-18 00:43:31.687920 | orchestrator | ===============================================================================
2026-04-18 00:43:31.687940 | orchestrator | Write configuration file ------------------------------------------------ 3.98s
2026-04-18 00:43:31.687951 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2026-04-18 00:43:31.687961 | orchestrator | Add known links to the list of available block devices ------------------ 1.00s
2026-04-18 00:43:31.687972 | orchestrator | Get initial list of available block devices ----------------------------- 0.98s
2026-04-18 00:43:31.687982 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.86s
2026-04-18 00:43:31.687992 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2026-04-18 00:43:31.688003 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2026-04-18 00:43:31.688013 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-04-18 00:43:31.688024 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2026-04-18 00:43:31.688034 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-04-18 00:43:31.688044 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.61s
2026-04-18 00:43:31.688054 | orchestrator | Print configuration data ------------------------------------------------ 0.60s
2026-04-18 00:43:31.688065 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-04-18 00:43:31.688085 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2026-04-18 00:43:31.970232 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.55s
2026-04-18 00:43:31.970372 | orchestrator | Set WAL devices config data --------------------------------------------- 0.55s
2026-04-18 00:43:31.970389 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.55s
2026-04-18 00:43:31.970418 | orchestrator | Add known partitions to the list of available block devices ------------- 0.55s
2026-04-18 00:43:31.970430 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2026-04-18 00:43:31.970442 | orchestrator | Print DB devices -------------------------------------------------------- 0.54s
2026-04-18 00:43:53.600216 | orchestrator | 2026-04-18 00:43:53 | INFO  | Task d4291b3e-2561-41a2-a6be-0ff3097fb211 (sync inventory) is running in background. Output coming soon.
2026-04-18 00:44:20.593638 | orchestrator | 2026-04-18 00:43:55 | INFO  | Starting group_vars file reorganization
2026-04-18 00:44:20.593734 | orchestrator | 2026-04-18 00:43:55 | INFO  | Moved 0 file(s) to their respective directories
2026-04-18 00:44:20.593745 | orchestrator | 2026-04-18 00:43:55 | INFO  | Group_vars file reorganization completed
2026-04-18 00:44:20.593752 | orchestrator | 2026-04-18 00:43:57 | INFO  | Starting variable preparation from inventory
2026-04-18 00:44:20.593761 | orchestrator | 2026-04-18 00:44:00 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-18 00:44:20.593768 | orchestrator | 2026-04-18 00:44:00 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-18 00:44:20.593776 | orchestrator | 2026-04-18 00:44:00 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-18 00:44:20.593782 | orchestrator | 2026-04-18 00:44:00 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-18 00:44:20.593789 | orchestrator | 2026-04-18 00:44:00 | INFO  | Variable preparation completed
2026-04-18 00:44:20.593796 | orchestrator | 2026-04-18 00:44:02 | INFO  | Starting inventory overwrite handling
2026-04-18 00:44:20.593803 | orchestrator | 2026-04-18 00:44:02 | INFO  | Handling group overwrites in 99-overwrite
2026-04-18 00:44:20.593818 | orchestrator | 2026-04-18 00:44:02 | INFO  | Removing group frr:children from 60-generic
2026-04-18 00:44:20.593824 | orchestrator | 2026-04-18 00:44:02 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-18 00:44:20.593860 | orchestrator | 2026-04-18 00:44:02 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-18 00:44:20.593908 | orchestrator | 2026-04-18 00:44:02 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-18 00:44:20.593916 | orchestrator | 2026-04-18 00:44:02 | INFO  | Handling group overwrites in 20-roles
2026-04-18 00:44:20.593922 | orchestrator | 2026-04-18 00:44:02 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-18 00:44:20.593930 | orchestrator | 2026-04-18 00:44:02 | INFO  | Removed 5 group(s) in total
2026-04-18 00:44:20.593936 | orchestrator | 2026-04-18 00:44:02 | INFO  | Inventory overwrite handling completed
2026-04-18 00:44:20.593942 | orchestrator | 2026-04-18 00:44:03 | INFO  | Starting merge of inventory files
2026-04-18 00:44:20.593948 | orchestrator | 2026-04-18 00:44:03 | INFO  | Inventory files merged successfully
2026-04-18 00:44:20.593955 | orchestrator | 2026-04-18 00:44:07 | INFO  | Generating minified hosts file
2026-04-18 00:44:20.593961 | orchestrator | 2026-04-18 00:44:08 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-18 00:44:20.593972 | orchestrator | 2026-04-18 00:44:08 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-18 00:44:20.593980 | orchestrator | 2026-04-18 00:44:10 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-18 00:44:20.593986 | orchestrator | 2026-04-18 00:44:19 | INFO  | Successfully wrote ClusterShell configuration
2026-04-18 00:44:20.593993 | orchestrator | [master 1c889c4] 2026-04-18-00-44
2026-04-18 00:44:20.594000 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-18 00:44:20.594008 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-18 00:44:20.594072 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-18 00:44:20.594081 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-18 00:44:21.806857 | orchestrator | 2026-04-18 00:44:21 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-18 00:44:21.863861 | orchestrator | 2026-04-18 00:44:21 | INFO  | Task 114ba715-6deb-4fca-af2e-ca17633e907e (ceph-create-lvm-devices) was prepared for execution.
2026-04-18 00:44:21.864009 | orchestrator | 2026-04-18 00:44:21 | INFO  | It takes a moment until task 114ba715-6deb-4fca-af2e-ca17633e907e (ceph-create-lvm-devices) has been started and output is visible here.
2026-04-18 00:44:32.101110 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-18 00:44:32.101243 | orchestrator | 2.16.14
2026-04-18 00:44:32.101258 | orchestrator |
2026-04-18 00:44:32.101271 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-18 00:44:32.101282 | orchestrator |
2026-04-18 00:44:32.101293 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-18 00:44:32.101304 | orchestrator | Saturday 18 April 2026  00:44:25 +0000 (0:00:00.203)       0:00:00.203 ********
2026-04-18 00:44:32.101315 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-18 00:44:32.101325 | orchestrator |
2026-04-18 00:44:32.101335 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-18 00:44:32.101345 | orchestrator | Saturday 18 April 2026  00:44:25 +0000 (0:00:00.217)       0:00:00.420 ********
2026-04-18 00:44:32.101354 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:44:32.101365 | orchestrator |
2026-04-18 00:44:32.101374 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101385 | orchestrator | Saturday 18 April 2026  00:44:25 +0000 (0:00:00.198)       0:00:00.619 ********
2026-04-18 00:44:32.101395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-18 00:44:32.101428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-18 00:44:32.101440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-18 00:44:32.101451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-18 00:44:32.101462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-18 00:44:32.101473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-18 00:44:32.101491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-18 00:44:32.101501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-18 00:44:32.101512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-18 00:44:32.101523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-18 00:44:32.101529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-18 00:44:32.101535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-18 00:44:32.101542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-18 00:44:32.101548 | orchestrator |
2026-04-18 00:44:32.101554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101664 | orchestrator | Saturday 18 April 2026  00:44:26 +0000 (0:00:00.346)       0:00:00.965 ********
2026-04-18 00:44:32.101674 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.101714 | orchestrator |
2026-04-18 00:44:32.101722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101729 | orchestrator | Saturday 18 April 2026  00:44:26 +0000 (0:00:00.329)       0:00:01.295 ********
2026-04-18 00:44:32.101736 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.101743 | orchestrator |
2026-04-18 00:44:32.101750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101757 | orchestrator | Saturday 18 April 2026  00:44:26 +0000 (0:00:00.163)       0:00:01.458 ********
2026-04-18 00:44:32.101764 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.101771 | orchestrator |
2026-04-18 00:44:32.101778 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101785 | orchestrator | Saturday 18 April 2026  00:44:26 +0000 (0:00:00.163)       0:00:01.622 ********
2026-04-18 00:44:32.101792 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.101799 | orchestrator |
2026-04-18 00:44:32.101806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101813 | orchestrator | Saturday 18 April 2026  00:44:27 +0000 (0:00:00.165)       0:00:01.788 ********
2026-04-18 00:44:32.101820 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.101827 | orchestrator |
2026-04-18 00:44:32.101835 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101842 | orchestrator | Saturday 18 April 2026  00:44:27 +0000 (0:00:00.182)       0:00:01.970 ********
2026-04-18 00:44:32.101849 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.101856 | orchestrator |
2026-04-18 00:44:32.101863 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101899 | orchestrator | Saturday 18 April 2026  00:44:27 +0000 (0:00:00.181)       0:00:02.151 ********
2026-04-18 00:44:32.101908 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.101914 | orchestrator |
2026-04-18 00:44:32.101920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101926 | orchestrator | Saturday 18 April 2026  00:44:27 +0000 (0:00:00.178)       0:00:02.330 ********
2026-04-18 00:44:32.101932 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.101938 | orchestrator |
2026-04-18 00:44:32.101945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.101960 | orchestrator | Saturday 18 April 2026  00:44:27 +0000 (0:00:00.176)       0:00:02.507 ********
2026-04-18 00:44:32.101966 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80)
2026-04-18 00:44:32.101973 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80)
2026-04-18 00:44:32.101979 | orchestrator |
2026-04-18 00:44:32.101986 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.102010 | orchestrator | Saturday 18 April 2026  00:44:28 +0000 (0:00:00.333)       0:00:02.841 ********
2026-04-18 00:44:32.102070 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447)
2026-04-18 00:44:32.102082 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447)
2026-04-18 00:44:32.102090 | orchestrator |
2026-04-18 00:44:32.102100 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.102138 | orchestrator | Saturday 18 April 2026  00:44:28 +0000 (0:00:00.366)       0:00:03.207 ********
2026-04-18 00:44:32.102150 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b)
2026-04-18 00:44:32.102160 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b)
2026-04-18 00:44:32.102223 | orchestrator |
2026-04-18 00:44:32.102284 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.102297 | orchestrator | Saturday 18 April 2026  00:44:29 +0000 (0:00:00.459)       0:00:03.667 ********
2026-04-18 00:44:32.102308 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e)
2026-04-18 00:44:32.102319 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e)
2026-04-18 00:44:32.102329 | orchestrator |
2026-04-18 00:44:32.102339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:32.102350 | orchestrator | Saturday 18 April 2026  00:44:29 +0000 (0:00:00.514)       0:00:04.181 ********
2026-04-18 00:44:32.102361 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-18 00:44:32.102372 | orchestrator |
2026-04-18 00:44:32.102383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:32.102395 | orchestrator | Saturday 18 April 2026  00:44:30 +0000 (0:00:00.926)       0:00:05.108 ********
2026-04-18 00:44:32.102405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-18 00:44:32.102417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-18 00:44:32.102427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-18 00:44:32.102439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-18 00:44:32.102450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-18 00:44:32.102461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-18 00:44:32.102472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-18 00:44:32.102483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-18 00:44:32.102493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-18 00:44:32.102505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-18 00:44:32.102518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-18 00:44:32.102529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-18 00:44:32.102553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-18 00:44:32.102564 | orchestrator |
2026-04-18 00:44:32.102570 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:32.102576 | orchestrator | Saturday 18 April 2026  00:44:30 +0000 (0:00:00.390)       0:00:05.499 ********
2026-04-18 00:44:32.102582 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.102588 | orchestrator |
2026-04-18 00:44:32.102595 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:32.102601 | orchestrator | Saturday 18 April 2026  00:44:31 +0000 (0:00:00.184)       0:00:05.683 ********
2026-04-18 00:44:32.102607 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.102613 | orchestrator |
2026-04-18 00:44:32.102620 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:32.102626 | orchestrator | Saturday 18 April 2026  00:44:31 +0000 (0:00:00.175)       0:00:05.858 ********
2026-04-18 00:44:32.102632 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.102638 | orchestrator |
2026-04-18 00:44:32.102644 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:32.102650 | orchestrator | Saturday 18 April 2026  00:44:31 +0000 (0:00:00.170)       0:00:06.028 ********
2026-04-18 00:44:32.102656 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.102663 | orchestrator |
2026-04-18 00:44:32.102669 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:32.102675 | orchestrator | Saturday 18 April 2026  00:44:31 +0000 (0:00:00.169)       0:00:06.198 ********
2026-04-18 00:44:32.102681 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.102687 | orchestrator |
2026-04-18 00:44:32.102693 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:32.102699 | orchestrator | Saturday 18 April 2026  00:44:31 +0000 (0:00:00.170)       0:00:06.368 ********
2026-04-18 00:44:32.102706 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.102712 | orchestrator |
2026-04-18 00:44:32.102718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:32.102724 | orchestrator | Saturday 18 April 2026  00:44:31 +0000 (0:00:00.177)       0:00:06.545 ********
2026-04-18 00:44:32.102730 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:32.102736 | orchestrator |
2026-04-18 00:44:32.102753 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:39.164856 | orchestrator | Saturday 18 April 2026  00:44:32 +0000 (0:00:00.175)       0:00:06.721 ********
2026-04-18 00:44:39.165728 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.165778 | orchestrator |
2026-04-18 00:44:39.165786 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:39.165792 | orchestrator | Saturday 18 April 2026  00:44:32 +0000 (0:00:00.185)       0:00:06.907 ********
2026-04-18 00:44:39.165797 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-18 00:44:39.165803 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-18 00:44:39.165809 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-18 00:44:39.165814 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-18 00:44:39.165819 | orchestrator |
2026-04-18 00:44:39.165824 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:39.165829 | orchestrator | Saturday 18 April 2026  00:44:33 +0000 (0:00:00.844)       0:00:07.751 ********
2026-04-18 00:44:39.165834 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.165839 | orchestrator |
2026-04-18 00:44:39.165844 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:39.165849 | orchestrator | Saturday 18 April 2026  00:44:33 +0000 (0:00:00.179)       0:00:07.930 ********
2026-04-18 00:44:39.165854 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.165859 | orchestrator |
2026-04-18 00:44:39.165863 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:39.165868 | orchestrator | Saturday 18 April 2026  00:44:33 +0000 (0:00:00.184)       0:00:08.115 ********
2026-04-18 00:44:39.165919 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.165926 | orchestrator |
2026-04-18 00:44:39.165931 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:39.165936 | orchestrator | Saturday 18 April 2026  00:44:33 +0000 (0:00:00.167)       0:00:08.283 ********
2026-04-18 00:44:39.165940 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.165945 | orchestrator |
2026-04-18 00:44:39.165959 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-18 00:44:39.165964 | orchestrator | Saturday 18 April 2026  00:44:33 +0000 (0:00:00.175)       0:00:08.459 ********
2026-04-18 00:44:39.165969 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.165973 | orchestrator |
2026-04-18 00:44:39.165978 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-18 00:44:39.165983 | orchestrator | Saturday 18 April 2026  00:44:33 +0000 (0:00:00.112)       0:00:08.571 ********
2026-04-18 00:44:39.165989 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93b19634-3a0b-57aa-985a-342cbb17f88c'}})
2026-04-18 00:44:39.165994 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '97728c5d-edf3-594c-abdf-329078c85e67'}})
2026-04-18 00:44:39.165999 | orchestrator |
2026-04-18 00:44:39.166004 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-18 00:44:39.166008 | orchestrator | Saturday 18 April 2026  00:44:34 +0000 (0:00:00.171)       0:00:08.742 ********
2026-04-18 00:44:39.166058 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166070 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166079 | orchestrator |
2026-04-18 00:44:39.166087 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-18 00:44:39.166095 | orchestrator | Saturday 18 April 2026  00:44:36 +0000 (0:00:01.948)       0:00:10.690 ********
2026-04-18 00:44:39.166103 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166114 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166122 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166130 | orchestrator |
2026-04-18 00:44:39.166139 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-18 00:44:39.166148 | orchestrator | Saturday 18 April 2026  00:44:36 +0000 (0:00:00.135)       0:00:10.826 ********
2026-04-18 00:44:39.166157 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166165 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166174 | orchestrator |
2026-04-18 00:44:39.166182 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-18 00:44:39.166191 | orchestrator | Saturday 18 April 2026  00:44:37 +0000 (0:00:01.384)       0:00:12.211 ********
2026-04-18 00:44:39.166199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166215 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166223 | orchestrator |
2026-04-18 00:44:39.166231 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-18 00:44:39.166249 | orchestrator | Saturday 18 April 2026  00:44:37 +0000 (0:00:00.134)       0:00:12.345 ********
2026-04-18 00:44:39.166282 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166289 | orchestrator |
2026-04-18 00:44:39.166294 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-18 00:44:39.166299 | orchestrator | Saturday 18 April 2026  00:44:37 +0000 (0:00:00.102)       0:00:12.447 ********
2026-04-18 00:44:39.166304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166309 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166313 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166318 | orchestrator |
2026-04-18 00:44:39.166324 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-18 00:44:39.166332 | orchestrator | Saturday 18 April 2026  00:44:38 +0000 (0:00:00.266)       0:00:12.714 ********
2026-04-18 00:44:39.166340 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166349 | orchestrator |
2026-04-18 00:44:39.166357 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-18 00:44:39.166366 | orchestrator | Saturday 18 April 2026  00:44:38 +0000 (0:00:00.121)       0:00:12.835 ********
2026-04-18 00:44:39.166374 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166391 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166400 | orchestrator |
2026-04-18 00:44:39.166406 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-18 00:44:39.166411 | orchestrator | Saturday 18 April 2026  00:44:38 +0000 (0:00:00.108)       0:00:12.944 ********
2026-04-18 00:44:39.166416 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166420 | orchestrator |
2026-04-18 00:44:39.166425 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-18 00:44:39.166430 | orchestrator | Saturday 18 April 2026  00:44:38 +0000 (0:00:00.109)       0:00:13.053 ********
2026-04-18 00:44:39.166435 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166445 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166450 | orchestrator |
2026-04-18 00:44:39.166455 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-18 00:44:39.166460 | orchestrator | Saturday 18 April 2026  00:44:38 +0000 (0:00:00.118)       0:00:13.172 ********
2026-04-18 00:44:39.166465 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:44:39.166470 | orchestrator |
2026-04-18 00:44:39.166475 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-18 00:44:39.166480 | orchestrator | Saturday 18 April 2026  00:44:38 +0000 (0:00:00.131)       0:00:13.304 ********
2026-04-18 00:44:39.166485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166490 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166495 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166500 | orchestrator |
2026-04-18 00:44:39.166504 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-18 00:44:39.166514 | orchestrator | Saturday 18 April 2026  00:44:38 +0000 (0:00:00.123)       0:00:13.427 ********
2026-04-18 00:44:39.166519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166529 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166534 | orchestrator |
2026-04-18 00:44:39.166539 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-18 00:44:39.166543 | orchestrator | Saturday 18 April 2026  00:44:38 +0000 (0:00:00.128)       0:00:13.555 ********
2026-04-18 00:44:39.166548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:39.166553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:39.166558 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166563 | orchestrator |
2026-04-18 00:44:39.166567 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-18 00:44:39.166572 | orchestrator | Saturday 18 April 2026  00:44:39 +0000 (0:00:00.114)       0:00:13.670 ********
2026-04-18 00:44:39.166577 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:39.166582 | orchestrator |
2026-04-18 00:44:39.166586 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-18 00:44:39.166596 | orchestrator | Saturday 18 April 2026  00:44:39 +0000 (0:00:00.115)       0:00:13.785 ********
2026-04-18 00:44:44.537582 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.537619 | orchestrator |
2026-04-18 00:44:44.537626 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-18 00:44:44.537631 | orchestrator | Saturday 18 April 2026 00:44:39 +0000 (0:00:00.114) 0:00:13.900 ********
2026-04-18 00:44:44.537635 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.537640 | orchestrator |
2026-04-18 00:44:44.537644 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-18 00:44:44.537648 | orchestrator | Saturday 18 April 2026 00:44:39 +0000 (0:00:00.127) 0:00:14.027 ********
2026-04-18 00:44:44.537652 | orchestrator | ok: [testbed-node-3] => {
2026-04-18 00:44:44.537656 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-18 00:44:44.537661 | orchestrator | }
2026-04-18 00:44:44.537665 | orchestrator |
2026-04-18 00:44:44.537669 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-18 00:44:44.537673 | orchestrator | Saturday 18 April 2026 00:44:39 +0000 (0:00:00.238) 0:00:14.265 ********
2026-04-18 00:44:44.537676 | orchestrator | ok: [testbed-node-3] => {
2026-04-18 00:44:44.537680 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-18 00:44:44.537684 | orchestrator | }
2026-04-18 00:44:44.537687 | orchestrator |
2026-04-18 00:44:44.537691 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-18 00:44:44.537695 | orchestrator | Saturday 18 April 2026 00:44:39 +0000 (0:00:00.116) 0:00:14.381 ********
2026-04-18 00:44:44.537699 | orchestrator | ok: [testbed-node-3] => {
2026-04-18 00:44:44.537702 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-18 00:44:44.537720 | orchestrator | }
2026-04-18 00:44:44.537724 | orchestrator |
2026-04-18 00:44:44.537728 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-18 00:44:44.537732 | orchestrator | Saturday 18 April 2026 00:44:39 +0000 (0:00:00.119) 0:00:14.501 ********
2026-04-18 00:44:44.537735 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:44:44.537739 | orchestrator |
2026-04-18 00:44:44.537746 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-18 00:44:44.537750 | orchestrator | Saturday 18 April 2026 00:44:40 +0000 (0:00:00.603) 0:00:15.105 ********
2026-04-18 00:44:44.537753 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:44:44.537772 | orchestrator |
2026-04-18 00:44:44.537776 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-18 00:44:44.537779 | orchestrator | Saturday 18 April 2026 00:44:40 +0000 (0:00:00.454) 0:00:15.559 ********
2026-04-18 00:44:44.537783 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:44:44.537787 | orchestrator |
2026-04-18 00:44:44.537791 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-18 00:44:44.537794 | orchestrator | Saturday 18 April 2026 00:44:41 +0000 (0:00:00.511) 0:00:16.071 ********
2026-04-18 00:44:44.537798 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:44:44.537802 | orchestrator |
2026-04-18 00:44:44.537805 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-18 00:44:44.537809 | orchestrator | Saturday 18 April 2026 00:44:41 +0000 (0:00:00.128) 0:00:16.200 ********
2026-04-18 00:44:44.537813 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.537819 | orchestrator |
2026-04-18 00:44:44.537826 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-18 00:44:44.537832 | orchestrator | Saturday 18 April 2026 00:44:41 +0000 (0:00:00.093) 0:00:16.293 ********
2026-04-18 00:44:44.537839 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.537845 | orchestrator |
2026-04-18 00:44:44.537852 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-18 00:44:44.537859 | orchestrator | Saturday 18 April 2026 00:44:41 +0000 (0:00:00.087) 0:00:16.381 ********
2026-04-18 00:44:44.537865 | orchestrator | ok: [testbed-node-3] => {
2026-04-18 00:44:44.537871 | orchestrator |     "vgs_report": {
2026-04-18 00:44:44.537876 | orchestrator |         "vg": []
2026-04-18 00:44:44.537934 | orchestrator |     }
2026-04-18 00:44:44.537938 | orchestrator | }
2026-04-18 00:44:44.537942 | orchestrator |
2026-04-18 00:44:44.537946 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-18 00:44:44.537949 | orchestrator | Saturday 18 April 2026 00:44:41 +0000 (0:00:00.131) 0:00:16.512 ********
2026-04-18 00:44:44.537953 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.537957 | orchestrator |
2026-04-18 00:44:44.537960 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-18 00:44:44.537964 | orchestrator | Saturday 18 April 2026 00:44:42 +0000 (0:00:00.121) 0:00:16.634 ********
2026-04-18 00:44:44.537969 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.537972 | orchestrator |
2026-04-18 00:44:44.537976 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-18 00:44:44.537980 | orchestrator | Saturday 18 April 2026 00:44:42 +0000 (0:00:00.106) 0:00:16.741 ********
2026-04-18 00:44:44.537984 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.537987 | orchestrator |
2026-04-18 00:44:44.537991 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-18 00:44:44.537995 | orchestrator | Saturday 18 April 2026 00:44:42 +0000 (0:00:00.106) 0:00:16.847 ********
2026-04-18 00:44:44.537998 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538002 | orchestrator |
2026-04-18 00:44:44.538006 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-18 00:44:44.538010 | orchestrator | Saturday 18 April 2026 00:44:42 +0000 (0:00:00.242) 0:00:17.089 ********
2026-04-18 00:44:44.538043 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538049 | orchestrator |
2026-04-18 00:44:44.538054 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-18 00:44:44.538060 | orchestrator | Saturday 18 April 2026 00:44:42 +0000 (0:00:00.125) 0:00:17.215 ********
2026-04-18 00:44:44.538066 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538071 | orchestrator |
2026-04-18 00:44:44.538076 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-18 00:44:44.538082 | orchestrator | Saturday 18 April 2026 00:44:42 +0000 (0:00:00.107) 0:00:17.322 ********
2026-04-18 00:44:44.538089 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538095 | orchestrator |
2026-04-18 00:44:44.538101 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-18 00:44:44.538115 | orchestrator | Saturday 18 April 2026 00:44:42 +0000 (0:00:00.119) 0:00:17.442 ********
2026-04-18 00:44:44.538134 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538142 | orchestrator |
2026-04-18 00:44:44.538148 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-18 00:44:44.538154 | orchestrator | Saturday 18 April 2026 00:44:42 +0000 (0:00:00.127) 0:00:17.569 ********
2026-04-18 00:44:44.538160 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538166 | orchestrator |
2026-04-18 00:44:44.538173 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-18 00:44:44.538178 | orchestrator | Saturday 18 April 2026 00:44:43 +0000 (0:00:00.115) 0:00:17.684 ********
2026-04-18 00:44:44.538182 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538186 | orchestrator |
2026-04-18 00:44:44.538191 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-18 00:44:44.538195 | orchestrator | Saturday 18 April 2026 00:44:43 +0000 (0:00:00.119) 0:00:17.803 ********
2026-04-18 00:44:44.538199 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538203 | orchestrator |
2026-04-18 00:44:44.538208 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-18 00:44:44.538212 | orchestrator | Saturday 18 April 2026 00:44:43 +0000 (0:00:00.129) 0:00:17.932 ********
2026-04-18 00:44:44.538216 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538223 | orchestrator |
2026-04-18 00:44:44.538229 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-18 00:44:44.538236 | orchestrator | Saturday 18 April 2026 00:44:43 +0000 (0:00:00.122) 0:00:18.056 ********
2026-04-18 00:44:44.538243 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538249 | orchestrator |
2026-04-18 00:44:44.538256 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-18 00:44:44.538263 | orchestrator | Saturday 18 April 2026 00:44:43 +0000 (0:00:00.116) 0:00:18.172 ********
2026-04-18 00:44:44.538270 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538277 | orchestrator |
2026-04-18 00:44:44.538288 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-18 00:44:44.538294 | orchestrator | Saturday 18 April 2026 00:44:43 +0000 (0:00:00.118) 0:00:18.291 ********
2026-04-18 00:44:44.538302 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:44.538309 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:44.538313 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538317 | orchestrator |
2026-04-18 00:44:44.538322 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-18 00:44:44.538326 | orchestrator | Saturday 18 April 2026 00:44:43 +0000 (0:00:00.141) 0:00:18.433 ********
2026-04-18 00:44:44.538330 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:44.538334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:44.538338 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538343 | orchestrator |
2026-04-18 00:44:44.538347 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-18 00:44:44.538351 | orchestrator | Saturday 18 April 2026 00:44:44 +0000 (0:00:00.266) 0:00:18.699 ********
2026-04-18 00:44:44.538356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:44.538360 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:44.538369 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538373 | orchestrator |
2026-04-18 00:44:44.538377 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-18 00:44:44.538381 | orchestrator | Saturday 18 April 2026 00:44:44 +0000 (0:00:00.128) 0:00:18.836 ********
2026-04-18 00:44:44.538385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:44.538390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:44.538394 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538399 | orchestrator |
2026-04-18 00:44:44.538403 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-18 00:44:44.538407 | orchestrator | Saturday 18 April 2026 00:44:44 +0000 (0:00:00.138) 0:00:18.964 ********
2026-04-18 00:44:44.538411 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:44.538416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:44.538420 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:44.538424 | orchestrator |
2026-04-18 00:44:44.538428 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-18 00:44:44.538432 | orchestrator | Saturday 18 April 2026 00:44:44 +0000 (0:00:00.137) 0:00:19.103 ********
2026-04-18 00:44:44.538440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:49.263753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:49.263875 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:49.263964 | orchestrator |
2026-04-18 00:44:49.263983 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-18 00:44:49.264002 | orchestrator | Saturday 18 April 2026 00:44:44 +0000 (0:00:00.137) 0:00:19.241 ********
2026-04-18 00:44:49.264020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:49.264036 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:49.264049 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:49.264059 | orchestrator |
2026-04-18 00:44:49.264073 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-18 00:44:49.264089 | orchestrator | Saturday 18 April 2026 00:44:44 +0000 (0:00:00.141) 0:00:19.383 ********
2026-04-18 00:44:49.264105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:49.264121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:49.264136 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:49.264151 | orchestrator |
2026-04-18 00:44:49.264167 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-18 00:44:49.264184 | orchestrator | Saturday 18 April 2026 00:44:44 +0000 (0:00:00.131) 0:00:19.515 ********
2026-04-18 00:44:49.264201 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:44:49.264218 | orchestrator |
2026-04-18 00:44:49.264236 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-18 00:44:49.264288 | orchestrator | Saturday 18 April 2026 00:44:45 +0000 (0:00:00.514) 0:00:20.029 ********
2026-04-18 00:44:49.264302 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:44:49.264313 | orchestrator |
2026-04-18 00:44:49.264324 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-18 00:44:49.264335 | orchestrator | Saturday 18 April 2026 00:44:45 +0000 (0:00:00.497) 0:00:20.527 ********
2026-04-18 00:44:49.264346 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:44:49.264357 | orchestrator |
2026-04-18 00:44:49.264368 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-18 00:44:49.264379 | orchestrator | Saturday 18 April 2026 00:44:46 +0000 (0:00:00.131) 0:00:20.659 ********
2026-04-18 00:44:49.264391 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'vg_name': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:49.264402 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'vg_name': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:49.264412 | orchestrator |
2026-04-18 00:44:49.264421 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-18 00:44:49.264432 | orchestrator | Saturday 18 April 2026 00:44:46 +0000 (0:00:00.148) 0:00:20.807 ********
2026-04-18 00:44:49.264442 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:49.264452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:49.264461 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:49.264471 | orchestrator |
2026-04-18 00:44:49.264481 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-18 00:44:49.264491 | orchestrator | Saturday 18 April 2026 00:44:46 +0000 (0:00:00.131) 0:00:20.938 ********
2026-04-18 00:44:49.264517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:49.264528 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:49.264553 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:49.264574 | orchestrator |
2026-04-18 00:44:49.264584 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-18 00:44:49.264594 | orchestrator | Saturday 18 April 2026 00:44:46 +0000 (0:00:00.322) 0:00:21.261 ********
2026-04-18 00:44:49.264603 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'})
2026-04-18 00:44:49.264613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'})
2026-04-18 00:44:49.264622 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:44:49.264632 | orchestrator |
2026-04-18 00:44:49.264641 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-18 00:44:49.264651 | orchestrator | Saturday 18 April 2026 00:44:46 +0000 (0:00:00.156) 0:00:21.417 ********
2026-04-18 00:44:49.264682 | orchestrator | ok: [testbed-node-3] => {
2026-04-18 00:44:49.264693 | orchestrator |     "lvm_report": {
2026-04-18 00:44:49.264703 | orchestrator |         "lv": [
2026-04-18 00:44:49.264716 | orchestrator |             {
2026-04-18 00:44:49.264734 | orchestrator |                 "lv_name": "osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c",
2026-04-18 00:44:49.264753 | orchestrator |                 "vg_name": "ceph-93b19634-3a0b-57aa-985a-342cbb17f88c"
2026-04-18 00:44:49.264772 | orchestrator |             },
2026-04-18 00:44:49.264790 | orchestrator |             {
2026-04-18 00:44:49.264824 | orchestrator |                 "lv_name": "osd-block-97728c5d-edf3-594c-abdf-329078c85e67",
2026-04-18 00:44:49.264843 | orchestrator |                 "vg_name": "ceph-97728c5d-edf3-594c-abdf-329078c85e67"
2026-04-18 00:44:49.264855 | orchestrator |             }
2026-04-18 00:44:49.264865 | orchestrator |         ],
2026-04-18 00:44:49.264874 | orchestrator |         "pv": [
2026-04-18 00:44:49.264910 | orchestrator |             {
2026-04-18 00:44:49.264921 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-18 00:44:49.264930 | orchestrator |                 "vg_name": "ceph-93b19634-3a0b-57aa-985a-342cbb17f88c"
2026-04-18 00:44:49.264940 | orchestrator |             },
2026-04-18 00:44:49.264949 | orchestrator |             {
2026-04-18 00:44:49.264959 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-18 00:44:49.264968 | orchestrator |                 "vg_name": "ceph-97728c5d-edf3-594c-abdf-329078c85e67"
2026-04-18 00:44:49.264978 | orchestrator |             }
2026-04-18 00:44:49.264987 | orchestrator |         ]
2026-04-18 00:44:49.264997 | orchestrator |     }
2026-04-18 00:44:49.265007 | orchestrator | }
2026-04-18 00:44:49.265016 | orchestrator |
2026-04-18 00:44:49.265026 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-18 00:44:49.265035 | orchestrator |
2026-04-18 00:44:49.265045 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-18 00:44:49.265061 | orchestrator | Saturday 18 April 2026 00:44:47 +0000 (0:00:00.269) 0:00:21.687 ********
2026-04-18 00:44:49.265071 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-18 00:44:49.265080 | orchestrator |
2026-04-18 00:44:49.265090 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-18 00:44:49.265099 | orchestrator | Saturday 18 April 2026 00:44:47 +0000 (0:00:00.213) 0:00:21.900 ********
2026-04-18 00:44:49.265109 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:44:49.265118 | orchestrator |
2026-04-18 00:44:49.265128 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:49.265138 | orchestrator | Saturday 18 April 2026 00:44:47 +0000 (0:00:00.207) 0:00:22.108 ********
2026-04-18 00:44:49.265147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-04-18 00:44:49.265157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-04-18 00:44:49.265166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-04-18 00:44:49.265175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-04-18 00:44:49.265185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-04-18 00:44:49.265194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-04-18 00:44:49.265204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-04-18 00:44:49.265213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-04-18 00:44:49.265223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-04-18 00:44:49.265232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-04-18 00:44:49.265241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-04-18 00:44:49.265251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-04-18 00:44:49.265260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-04-18 00:44:49.265270 | orchestrator |
2026-04-18 00:44:49.265280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:49.265289 | orchestrator | Saturday 18 April 2026 00:44:47 +0000 (0:00:00.380) 0:00:22.488 ********
2026-04-18 00:44:49.265299 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:49.265308 | orchestrator |
2026-04-18 00:44:49.265317 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:49.265332 | orchestrator | Saturday 18 April 2026 00:44:48 +0000 (0:00:00.174) 0:00:22.662 ********
2026-04-18 00:44:49.265342 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:49.265351 | orchestrator |
2026-04-18 00:44:49.265360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:49.265370 | orchestrator | Saturday 18 April 2026 00:44:48 +0000 (0:00:00.179) 0:00:22.842 ********
2026-04-18 00:44:49.265379 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:49.265389 | orchestrator |
2026-04-18 00:44:49.265398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:49.265408 | orchestrator | Saturday 18 April 2026 00:44:48 +0000 (0:00:00.171) 0:00:23.013 ********
2026-04-18 00:44:49.265417 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:49.265427 | orchestrator |
2026-04-18 00:44:49.265436 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:49.265446 | orchestrator | Saturday 18 April 2026 00:44:48 +0000 (0:00:00.493) 0:00:23.506 ********
2026-04-18 00:44:49.265455 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:49.265465 | orchestrator |
2026-04-18 00:44:49.265474 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:49.265484 | orchestrator | Saturday 18 April 2026 00:44:49 +0000 (0:00:00.191) 0:00:23.698 ********
2026-04-18 00:44:49.265494 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:49.265503 | orchestrator |
2026-04-18 00:44:49.265521 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:58.870508 | orchestrator | Saturday 18 April 2026 00:44:49 +0000 (0:00:00.187) 0:00:23.886 ********
2026-04-18 00:44:58.870575 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870582 | orchestrator |
2026-04-18 00:44:58.870587 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:58.870591 | orchestrator | Saturday 18 April 2026 00:44:49 +0000 (0:00:00.178) 0:00:24.064 ********
2026-04-18 00:44:58.870595 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870599 | orchestrator |
2026-04-18 00:44:58.870603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:58.870607 | orchestrator | Saturday 18 April 2026 00:44:49 +0000 (0:00:00.161) 0:00:24.225 ********
2026-04-18 00:44:58.870612 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee)
2026-04-18 00:44:58.870617 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee)
2026-04-18 00:44:58.870621 | orchestrator |
2026-04-18 00:44:58.870624 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:58.870628 | orchestrator | Saturday 18 April 2026 00:44:49 +0000 (0:00:00.391) 0:00:24.617 ********
2026-04-18 00:44:58.870632 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0)
2026-04-18 00:44:58.870635 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0)
2026-04-18 00:44:58.870639 | orchestrator |
2026-04-18 00:44:58.870654 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:58.870658 | orchestrator | Saturday 18 April 2026 00:44:50 +0000 (0:00:00.381) 0:00:24.998 ********
2026-04-18 00:44:58.870662 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527)
2026-04-18 00:44:58.870665 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527)
2026-04-18 00:44:58.870669 | orchestrator |
2026-04-18 00:44:58.870673 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:58.870676 | orchestrator | Saturday 18 April 2026 00:44:50 +0000 (0:00:00.403) 0:00:25.402 ********
2026-04-18 00:44:58.870680 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d)
2026-04-18 00:44:58.870699 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d)
2026-04-18 00:44:58.870703 | orchestrator |
2026-04-18 00:44:58.870707 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:44:58.870710 | orchestrator | Saturday 18 April 2026 00:44:51 +0000 (0:00:00.411) 0:00:25.813 ********
2026-04-18 00:44:58.870714 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-18 00:44:58.870718 | orchestrator |
2026-04-18 00:44:58.870721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870725 | orchestrator | Saturday 18 April 2026 00:44:51 +0000 (0:00:00.307) 0:00:26.121 ********
2026-04-18 00:44:58.870729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-18 00:44:58.870733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-18 00:44:58.870737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-18 00:44:58.870741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-18 00:44:58.870744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-18 00:44:58.870748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-18 00:44:58.870751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-18 00:44:58.870755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-18 00:44:58.870759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-18 00:44:58.870763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-18 00:44:58.870767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-18 00:44:58.870770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-18 00:44:58.870774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-18 00:44:58.870778 | orchestrator |
2026-04-18 00:44:58.870781 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870785 | orchestrator | Saturday 18 April 2026 00:44:52 +0000 (0:00:00.532) 0:00:26.653 ********
2026-04-18 00:44:58.870789 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870793 | orchestrator |
2026-04-18 00:44:58.870796 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870800 | orchestrator | Saturday 18 April 2026 00:44:52 +0000 (0:00:00.182) 0:00:26.835 ********
2026-04-18 00:44:58.870804 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870807 | orchestrator |
2026-04-18 00:44:58.870811 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870815 | orchestrator | Saturday 18 April 2026 00:44:52 +0000 (0:00:00.177) 0:00:27.013 ********
2026-04-18 00:44:58.870818 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870822 | orchestrator |
2026-04-18 00:44:58.870835 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870839 | orchestrator | Saturday 18 April 2026 00:44:52 +0000 (0:00:00.188) 0:00:27.201 ********
2026-04-18 00:44:58.870843 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870847 | orchestrator |
2026-04-18 00:44:58.870850 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870854 | orchestrator | Saturday 18 April 2026 00:44:52 +0000 (0:00:00.157) 0:00:27.359 ********
2026-04-18 00:44:58.870858 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870861 | orchestrator |
2026-04-18 00:44:58.870865 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870872 | orchestrator | Saturday 18 April 2026 00:44:52 +0000 (0:00:00.171) 0:00:27.531 ********
2026-04-18 00:44:58.870876 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870880 | orchestrator |
2026-04-18 00:44:58.870883 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870931 | orchestrator | Saturday 18 April 2026 00:44:53 +0000 (0:00:00.166) 0:00:27.697 ********
2026-04-18 00:44:58.870935 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870939 | orchestrator |
2026-04-18 00:44:58.870943 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870946 | orchestrator | Saturday 18 April 2026 00:44:53 +0000 (0:00:00.195) 0:00:27.892 ********
2026-04-18 00:44:58.870950 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.870954 | orchestrator |
2026-04-18 00:44:58.870957 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.870961 | orchestrator | Saturday 18 April 2026 00:44:53 +0000 (0:00:00.182) 0:00:28.075 ********
2026-04-18 00:44:58.870965 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-18 00:44:58.870969 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-18 00:44:58.870973 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-18 00:44:58.870990 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-18 00:44:58.870997 | orchestrator |
2026-04-18 00:44:58.871010 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.871016 | orchestrator | Saturday 18 April 2026 00:44:54 +0000 (0:00:00.783) 0:00:28.859 ********
2026-04-18 00:44:58.871022 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.871029 | orchestrator |
2026-04-18 00:44:58.871034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.871040 | orchestrator | Saturday 18 April 2026 00:44:54 +0000 (0:00:00.173) 0:00:29.032 ********
2026-04-18 00:44:58.871046 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.871052 | orchestrator |
2026-04-18 00:44:58.871058 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.871065 | orchestrator | Saturday 18 April 2026 00:44:54 +0000 (0:00:00.189) 0:00:29.221 ********
2026-04-18 00:44:58.871072 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.871078 | orchestrator |
2026-04-18 00:44:58.871088 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:44:58.871093 | orchestrator | Saturday 18 April 2026 00:44:55 +0000 (0:00:00.549) 0:00:29.770 ********
2026-04-18 00:44:58.871097 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.871101 | orchestrator |
2026-04-18 00:44:58.871106 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-18 00:44:58.871110 | orchestrator | Saturday 18 April 2026 00:44:55 +0000 (0:00:00.178) 0:00:29.949 ********
2026-04-18 00:44:58.871115 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:44:58.871119 | orchestrator |
2026-04-18 00:44:58.871123 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-18 00:44:58.871127 | orchestrator | Saturday 18 April 2026 00:44:55 +0000 (0:00:00.131) 0:00:30.081 ********
2026-04-18 00:44:58.871139 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9fd71a58-43ec-5e10-bd02-c7d805355b61'}})
2026-04-18 00:44:58.871144 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'}})
2026-04-18 00:44:58.871148 | orchestrator |
2026-04-18 00:44:58.871153 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-18 00:44:58.871157 | orchestrator | Saturday 18 April 2026 00:44:55 +0000 (0:00:00.175) 0:00:30.256 ********
2026-04-18 00:44:58.871162 | orchestrator | changed:
[testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'}) 2026-04-18 00:44:58.871168 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'}) 2026-04-18 00:44:58.871176 | orchestrator | 2026-04-18 00:44:58.871181 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-18 00:44:58.871185 | orchestrator | Saturday 18 April 2026 00:44:57 +0000 (0:00:01.805) 0:00:32.062 ******** 2026-04-18 00:44:58.871189 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:44:58.871195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:44:58.871199 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:44:58.871203 | orchestrator | 2026-04-18 00:44:58.871208 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-18 00:44:58.871212 | orchestrator | Saturday 18 April 2026 00:44:57 +0000 (0:00:00.142) 0:00:32.204 ******** 2026-04-18 00:44:58.871216 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'}) 2026-04-18 00:44:58.871224 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'}) 2026-04-18 00:45:04.280923 | orchestrator | 2026-04-18 00:45:04.280996 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-18 00:45:04.281004 | orchestrator | Saturday 18 April 2026 
00:44:58 +0000 (0:00:01.369) 0:00:33.574 ******** 2026-04-18 00:45:04.281009 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:04.281015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:04.281019 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281024 | orchestrator | 2026-04-18 00:45:04.281028 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-18 00:45:04.281031 | orchestrator | Saturday 18 April 2026 00:44:59 +0000 (0:00:00.168) 0:00:33.742 ******** 2026-04-18 00:45:04.281035 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281039 | orchestrator | 2026-04-18 00:45:04.281043 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-18 00:45:04.281047 | orchestrator | Saturday 18 April 2026 00:44:59 +0000 (0:00:00.116) 0:00:33.859 ******** 2026-04-18 00:45:04.281061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:04.281065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:04.281069 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281073 | orchestrator | 2026-04-18 00:45:04.281076 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-18 00:45:04.281080 | orchestrator | Saturday 18 April 2026 00:44:59 +0000 (0:00:00.134) 0:00:33.993 ******** 2026-04-18 00:45:04.281084 | orchestrator | skipping: [testbed-node-4] 2026-04-18 
00:45:04.281088 | orchestrator | 2026-04-18 00:45:04.281091 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-18 00:45:04.281095 | orchestrator | Saturday 18 April 2026 00:44:59 +0000 (0:00:00.123) 0:00:34.117 ******** 2026-04-18 00:45:04.281099 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:04.281103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:04.281107 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281123 | orchestrator | 2026-04-18 00:45:04.281130 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-18 00:45:04.281137 | orchestrator | Saturday 18 April 2026 00:44:59 +0000 (0:00:00.137) 0:00:34.255 ******** 2026-04-18 00:45:04.281143 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281149 | orchestrator | 2026-04-18 00:45:04.281156 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-18 00:45:04.281162 | orchestrator | Saturday 18 April 2026 00:44:59 +0000 (0:00:00.336) 0:00:34.591 ******** 2026-04-18 00:45:04.281168 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:04.281175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:04.281181 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281188 | orchestrator | 2026-04-18 00:45:04.281194 | orchestrator | TASK [Prepare variables for OSD count check] 
*********************************** 2026-04-18 00:45:04.281200 | orchestrator | Saturday 18 April 2026 00:45:00 +0000 (0:00:00.130) 0:00:34.721 ******** 2026-04-18 00:45:04.281207 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:45:04.281214 | orchestrator | 2026-04-18 00:45:04.281221 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-18 00:45:04.281227 | orchestrator | Saturday 18 April 2026 00:45:00 +0000 (0:00:00.135) 0:00:34.857 ******** 2026-04-18 00:45:04.281234 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:04.281241 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:04.281246 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281250 | orchestrator | 2026-04-18 00:45:04.281253 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-18 00:45:04.281257 | orchestrator | Saturday 18 April 2026 00:45:00 +0000 (0:00:00.131) 0:00:34.989 ******** 2026-04-18 00:45:04.281261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:04.281265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:04.281268 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281272 | orchestrator | 2026-04-18 00:45:04.281276 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-18 00:45:04.281291 | orchestrator | Saturday 18 April 2026 00:45:00 +0000 (0:00:00.135) 0:00:35.124 
******** 2026-04-18 00:45:04.281295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:04.281299 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:04.281303 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281307 | orchestrator | 2026-04-18 00:45:04.281310 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-18 00:45:04.281314 | orchestrator | Saturday 18 April 2026 00:45:00 +0000 (0:00:00.141) 0:00:35.266 ******** 2026-04-18 00:45:04.281318 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281322 | orchestrator | 2026-04-18 00:45:04.281325 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-18 00:45:04.281329 | orchestrator | Saturday 18 April 2026 00:45:00 +0000 (0:00:00.132) 0:00:35.399 ******** 2026-04-18 00:45:04.281334 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281346 | orchestrator | 2026-04-18 00:45:04.281351 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-18 00:45:04.281360 | orchestrator | Saturday 18 April 2026 00:45:00 +0000 (0:00:00.124) 0:00:35.524 ******** 2026-04-18 00:45:04.281382 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281387 | orchestrator | 2026-04-18 00:45:04.281393 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-18 00:45:04.281399 | orchestrator | Saturday 18 April 2026 00:45:01 +0000 (0:00:00.127) 0:00:35.651 ******** 2026-04-18 00:45:04.281404 | orchestrator | ok: [testbed-node-4] => { 2026-04-18 00:45:04.281409 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-18 
00:45:04.281415 | orchestrator | } 2026-04-18 00:45:04.281421 | orchestrator | 2026-04-18 00:45:04.281427 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-18 00:45:04.281432 | orchestrator | Saturday 18 April 2026 00:45:01 +0000 (0:00:00.119) 0:00:35.771 ******** 2026-04-18 00:45:04.281438 | orchestrator | ok: [testbed-node-4] => { 2026-04-18 00:45:04.281444 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-18 00:45:04.281451 | orchestrator | } 2026-04-18 00:45:04.281457 | orchestrator | 2026-04-18 00:45:04.281463 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-18 00:45:04.281469 | orchestrator | Saturday 18 April 2026 00:45:01 +0000 (0:00:00.126) 0:00:35.898 ******** 2026-04-18 00:45:04.281476 | orchestrator | ok: [testbed-node-4] => { 2026-04-18 00:45:04.281482 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-18 00:45:04.281488 | orchestrator | } 2026-04-18 00:45:04.281495 | orchestrator | 2026-04-18 00:45:04.281501 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-18 00:45:04.281505 | orchestrator | Saturday 18 April 2026 00:45:01 +0000 (0:00:00.129) 0:00:36.027 ******** 2026-04-18 00:45:04.281508 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:45:04.281512 | orchestrator | 2026-04-18 00:45:04.281516 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-18 00:45:04.281519 | orchestrator | Saturday 18 April 2026 00:45:02 +0000 (0:00:00.729) 0:00:36.756 ******** 2026-04-18 00:45:04.281523 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:45:04.281527 | orchestrator | 2026-04-18 00:45:04.281531 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-18 00:45:04.281534 | orchestrator | Saturday 18 April 2026 00:45:02 +0000 (0:00:00.600) 0:00:37.357 ******** 2026-04-18 
00:45:04.281538 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:45:04.281542 | orchestrator | 2026-04-18 00:45:04.281546 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-18 00:45:04.281549 | orchestrator | Saturday 18 April 2026 00:45:03 +0000 (0:00:00.642) 0:00:37.999 ******** 2026-04-18 00:45:04.281553 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:45:04.281557 | orchestrator | 2026-04-18 00:45:04.281561 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-18 00:45:04.281564 | orchestrator | Saturday 18 April 2026 00:45:03 +0000 (0:00:00.129) 0:00:38.128 ******** 2026-04-18 00:45:04.281568 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281572 | orchestrator | 2026-04-18 00:45:04.281575 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-18 00:45:04.281579 | orchestrator | Saturday 18 April 2026 00:45:03 +0000 (0:00:00.094) 0:00:38.223 ******** 2026-04-18 00:45:04.281583 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281587 | orchestrator | 2026-04-18 00:45:04.281590 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-18 00:45:04.281594 | orchestrator | Saturday 18 April 2026 00:45:03 +0000 (0:00:00.082) 0:00:38.305 ******** 2026-04-18 00:45:04.281598 | orchestrator | ok: [testbed-node-4] => { 2026-04-18 00:45:04.281602 | orchestrator |  "vgs_report": { 2026-04-18 00:45:04.281606 | orchestrator |  "vg": [] 2026-04-18 00:45:04.281610 | orchestrator |  } 2026-04-18 00:45:04.281614 | orchestrator | } 2026-04-18 00:45:04.281618 | orchestrator | 2026-04-18 00:45:04.281621 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-18 00:45:04.281630 | orchestrator | Saturday 18 April 2026 00:45:03 +0000 (0:00:00.126) 0:00:38.431 ******** 2026-04-18 00:45:04.281634 | 
orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281638 | orchestrator | 2026-04-18 00:45:04.281641 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-18 00:45:04.281645 | orchestrator | Saturday 18 April 2026 00:45:03 +0000 (0:00:00.137) 0:00:38.569 ******** 2026-04-18 00:45:04.281649 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281652 | orchestrator | 2026-04-18 00:45:04.281656 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-18 00:45:04.281660 | orchestrator | Saturday 18 April 2026 00:45:04 +0000 (0:00:00.107) 0:00:38.676 ******** 2026-04-18 00:45:04.281664 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281667 | orchestrator | 2026-04-18 00:45:04.281671 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-18 00:45:04.281675 | orchestrator | Saturday 18 April 2026 00:45:04 +0000 (0:00:00.111) 0:00:38.788 ******** 2026-04-18 00:45:04.281679 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:04.281682 | orchestrator | 2026-04-18 00:45:04.281690 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-18 00:45:08.449403 | orchestrator | Saturday 18 April 2026 00:45:04 +0000 (0:00:00.113) 0:00:38.902 ******** 2026-04-18 00:45:08.450387 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450431 | orchestrator | 2026-04-18 00:45:08.450441 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-18 00:45:08.450450 | orchestrator | Saturday 18 April 2026 00:45:04 +0000 (0:00:00.117) 0:00:39.020 ******** 2026-04-18 00:45:08.450463 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450477 | orchestrator | 2026-04-18 00:45:08.450490 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 
2026-04-18 00:45:08.450500 | orchestrator | Saturday 18 April 2026 00:45:04 +0000 (0:00:00.249) 0:00:39.270 ******** 2026-04-18 00:45:08.450507 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450514 | orchestrator | 2026-04-18 00:45:08.450521 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-18 00:45:08.450527 | orchestrator | Saturday 18 April 2026 00:45:04 +0000 (0:00:00.127) 0:00:39.397 ******** 2026-04-18 00:45:08.450537 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450547 | orchestrator | 2026-04-18 00:45:08.450554 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-18 00:45:08.450562 | orchestrator | Saturday 18 April 2026 00:45:04 +0000 (0:00:00.120) 0:00:39.517 ******** 2026-04-18 00:45:08.450573 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450585 | orchestrator | 2026-04-18 00:45:08.450593 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-18 00:45:08.450600 | orchestrator | Saturday 18 April 2026 00:45:05 +0000 (0:00:00.113) 0:00:39.631 ******** 2026-04-18 00:45:08.450607 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450617 | orchestrator | 2026-04-18 00:45:08.450628 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-18 00:45:08.450635 | orchestrator | Saturday 18 April 2026 00:45:05 +0000 (0:00:00.108) 0:00:39.740 ******** 2026-04-18 00:45:08.450642 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450647 | orchestrator | 2026-04-18 00:45:08.450654 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-18 00:45:08.450660 | orchestrator | Saturday 18 April 2026 00:45:05 +0000 (0:00:00.105) 0:00:39.845 ******** 2026-04-18 00:45:08.450667 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450672 
| orchestrator | 2026-04-18 00:45:08.450679 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-18 00:45:08.450686 | orchestrator | Saturday 18 April 2026 00:45:05 +0000 (0:00:00.122) 0:00:39.967 ******** 2026-04-18 00:45:08.450693 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450699 | orchestrator | 2026-04-18 00:45:08.450731 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-18 00:45:08.450738 | orchestrator | Saturday 18 April 2026 00:45:05 +0000 (0:00:00.105) 0:00:40.073 ******** 2026-04-18 00:45:08.450745 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450751 | orchestrator | 2026-04-18 00:45:08.450758 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-18 00:45:08.450765 | orchestrator | Saturday 18 April 2026 00:45:05 +0000 (0:00:00.109) 0:00:40.183 ******** 2026-04-18 00:45:08.450773 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:08.450782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:08.450789 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450796 | orchestrator | 2026-04-18 00:45:08.450803 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-18 00:45:08.450810 | orchestrator | Saturday 18 April 2026 00:45:05 +0000 (0:00:00.133) 0:00:40.316 ******** 2026-04-18 00:45:08.450816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:08.450822 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:08.450829 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450836 | orchestrator | 2026-04-18 00:45:08.450843 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-18 00:45:08.450849 | orchestrator | Saturday 18 April 2026 00:45:05 +0000 (0:00:00.118) 0:00:40.435 ******** 2026-04-18 00:45:08.450949 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:08.450961 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:08.450967 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.450973 | orchestrator | 2026-04-18 00:45:08.450979 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-18 00:45:08.450985 | orchestrator | Saturday 18 April 2026 00:45:05 +0000 (0:00:00.136) 0:00:40.572 ******** 2026-04-18 00:45:08.450992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:08.450998 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:08.451005 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.451011 | orchestrator | 2026-04-18 00:45:08.451040 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-18 00:45:08.451048 | orchestrator | Saturday 18 April 2026 00:45:06 +0000 (0:00:00.331) 0:00:40.904 ******** 2026-04-18 
00:45:08.451055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:08.451063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:08.451069 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.451075 | orchestrator | 2026-04-18 00:45:08.451081 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-18 00:45:08.451087 | orchestrator | Saturday 18 April 2026 00:45:06 +0000 (0:00:00.139) 0:00:41.043 ******** 2026-04-18 00:45:08.451093 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:08.451112 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:08.451119 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.451125 | orchestrator | 2026-04-18 00:45:08.451131 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-18 00:45:08.451138 | orchestrator | Saturday 18 April 2026 00:45:06 +0000 (0:00:00.132) 0:00:41.176 ******** 2026-04-18 00:45:08.451144 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:08.451151 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:08.451158 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.451164 | orchestrator | 
2026-04-18 00:45:08.451171 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-18 00:45:08.451177 | orchestrator | Saturday 18 April 2026 00:45:06 +0000 (0:00:00.124) 0:00:41.300 ******** 2026-04-18 00:45:08.451184 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})  2026-04-18 00:45:08.451190 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})  2026-04-18 00:45:08.451198 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:45:08.451206 | orchestrator | 2026-04-18 00:45:08.451214 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-18 00:45:08.451220 | orchestrator | Saturday 18 April 2026 00:45:06 +0000 (0:00:00.129) 0:00:41.430 ******** 2026-04-18 00:45:08.451226 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:45:08.451233 | orchestrator | 2026-04-18 00:45:08.451240 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-18 00:45:08.451247 | orchestrator | Saturday 18 April 2026 00:45:07 +0000 (0:00:00.572) 0:00:42.002 ******** 2026-04-18 00:45:08.451255 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:45:08.451261 | orchestrator | 2026-04-18 00:45:08.451268 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-18 00:45:08.451274 | orchestrator | Saturday 18 April 2026 00:45:07 +0000 (0:00:00.562) 0:00:42.564 ******** 2026-04-18 00:45:08.451280 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:45:08.451287 | orchestrator | 2026-04-18 00:45:08.451294 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-18 00:45:08.451302 | orchestrator | Saturday 18 April 2026 
00:45:08 +0000 (0:00:00.134) 0:00:42.699 ********
2026-04-18 00:45:08.451310 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'vg_name': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})
2026-04-18 00:45:08.451318 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'vg_name': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})
2026-04-18 00:45:08.451326 | orchestrator |
2026-04-18 00:45:08.451333 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-18 00:45:08.451341 | orchestrator | Saturday 18 April 2026 00:45:08 +0000 (0:00:00.167) 0:00:42.867 ********
2026-04-18 00:45:08.451348 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})
2026-04-18 00:45:08.451355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})
2026-04-18 00:45:08.451362 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:45:08.451369 | orchestrator |
2026-04-18 00:45:08.451383 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-18 00:45:08.451390 | orchestrator | Saturday 18 April 2026 00:45:08 +0000 (0:00:00.140) 0:00:43.007 ********
2026-04-18 00:45:08.451397 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})
2026-04-18 00:45:08.451411 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})
2026-04-18 00:45:13.824195 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:45:13.824286 | orchestrator |
2026-04-18 00:45:13.824298 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-18 00:45:13.824307 | orchestrator | Saturday 18 April 2026 00:45:08 +0000 (0:00:00.153) 0:00:43.160 ********
2026-04-18 00:45:13.824315 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'})
2026-04-18 00:45:13.824324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'})
2026-04-18 00:45:13.824331 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:45:13.824338 | orchestrator |
2026-04-18 00:45:13.824345 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-18 00:45:13.824352 | orchestrator | Saturday 18 April 2026 00:45:08 +0000 (0:00:00.170) 0:00:43.331 ********
2026-04-18 00:45:13.824358 | orchestrator | ok: [testbed-node-4] => {
2026-04-18 00:45:13.824365 | orchestrator |  "lvm_report": {
2026-04-18 00:45:13.824374 | orchestrator |  "lv": [
2026-04-18 00:45:13.824380 | orchestrator |  {
2026-04-18 00:45:13.824402 | orchestrator |  "lv_name": "osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a",
2026-04-18 00:45:13.824411 | orchestrator |  "vg_name": "ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a"
2026-04-18 00:45:13.824417 | orchestrator |  },
2026-04-18 00:45:13.824424 | orchestrator |  {
2026-04-18 00:45:13.824430 | orchestrator |  "lv_name": "osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61",
2026-04-18 00:45:13.824437 | orchestrator |  "vg_name": "ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61"
2026-04-18 00:45:13.824443 | orchestrator |  }
2026-04-18 00:45:13.824449 | orchestrator |  ],
2026-04-18 00:45:13.824456 | orchestrator |  "pv": [
2026-04-18 00:45:13.824462 | orchestrator |  {
2026-04-18 00:45:13.824469 | orchestrator |  "pv_name": "/dev/sdb",
2026-04-18 00:45:13.824475 | orchestrator |  "vg_name": "ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61"
2026-04-18 00:45:13.824482 | orchestrator |  },
2026-04-18 00:45:13.824489 | orchestrator |  {
2026-04-18 00:45:13.824495 | orchestrator |  "pv_name": "/dev/sdc",
2026-04-18 00:45:13.824501 | orchestrator |  "vg_name": "ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a"
2026-04-18 00:45:13.824507 | orchestrator |  }
2026-04-18 00:45:13.824514 | orchestrator |  ]
2026-04-18 00:45:13.824520 | orchestrator |  }
2026-04-18 00:45:13.824526 | orchestrator | }
2026-04-18 00:45:13.824533 | orchestrator |
2026-04-18 00:45:13.824538 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-18 00:45:13.824544 | orchestrator |
2026-04-18 00:45:13.824550 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-18 00:45:13.824558 | orchestrator | Saturday 18 April 2026 00:45:09 +0000 (0:00:00.449) 0:00:43.780 ********
2026-04-18 00:45:13.824565 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-18 00:45:13.824572 | orchestrator |
2026-04-18 00:45:13.824578 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-18 00:45:13.824585 | orchestrator | Saturday 18 April 2026 00:45:09 +0000 (0:00:00.226) 0:00:44.006 ********
2026-04-18 00:45:13.824591 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:45:13.824619 | orchestrator |
2026-04-18 00:45:13.824626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.824633 | orchestrator | Saturday 18 April 2026 00:45:09 +0000 (0:00:00.199) 0:00:44.206 ********
2026-04-18 00:45:13.824640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-18 00:45:13.824647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-18 00:45:13.824654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-18 00:45:13.824660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-18 00:45:13.824671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-18 00:45:13.824678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-18 00:45:13.824684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-18 00:45:13.824691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-18 00:45:13.824698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-18 00:45:13.824705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-18 00:45:13.824711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-18 00:45:13.824718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-18 00:45:13.824724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-18 00:45:13.824731 | orchestrator |
2026-04-18 00:45:13.824738 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.824744 | orchestrator | Saturday 18 April 2026 00:45:09 +0000 (0:00:00.358) 0:00:44.565 ********
2026-04-18 00:45:13.824751 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:13.824758 | orchestrator |
2026-04-18 00:45:13.824765 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.824771 | orchestrator | Saturday 18 April 2026 00:45:10 +0000 (0:00:00.182) 0:00:44.747 ********
2026-04-18 00:45:13.824778 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:13.824784 | orchestrator |
2026-04-18 00:45:13.824791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.824812 | orchestrator | Saturday 18 April 2026 00:45:10 +0000 (0:00:00.175) 0:00:44.922 ********
2026-04-18 00:45:13.824820 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:13.824827 | orchestrator |
2026-04-18 00:45:13.824834 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.824841 | orchestrator | Saturday 18 April 2026 00:45:10 +0000 (0:00:00.178) 0:00:45.101 ********
2026-04-18 00:45:13.824847 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:13.824854 | orchestrator |
2026-04-18 00:45:13.824860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.824867 | orchestrator | Saturday 18 April 2026 00:45:10 +0000 (0:00:00.177) 0:00:45.279 ********
2026-04-18 00:45:13.824873 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:13.824880 | orchestrator |
2026-04-18 00:45:13.824887 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.824915 | orchestrator | Saturday 18 April 2026 00:45:10 +0000 (0:00:00.156) 0:00:45.435 ********
2026-04-18 00:45:13.824921 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:13.824927 | orchestrator |
2026-04-18 00:45:13.824933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.824938 | orchestrator | Saturday 18 April 2026 00:45:11 +0000 (0:00:00.450) 0:00:45.885 ********
2026-04-18 00:45:13.824951 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:13.824957 | orchestrator |
2026-04-18 00:45:13.824971 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.824978 | orchestrator | Saturday 18 April 2026 00:45:11 +0000 (0:00:00.176) 0:00:46.062 ********
2026-04-18 00:45:13.824985 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:13.824992 | orchestrator |
2026-04-18 00:45:13.824998 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.825005 | orchestrator | Saturday 18 April 2026 00:45:11 +0000 (0:00:00.176) 0:00:46.238 ********
2026-04-18 00:45:13.825012 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5)
2026-04-18 00:45:13.825021 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5)
2026-04-18 00:45:13.825028 | orchestrator |
2026-04-18 00:45:13.825035 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.825042 | orchestrator | Saturday 18 April 2026 00:45:11 +0000 (0:00:00.364) 0:00:46.602 ********
2026-04-18 00:45:13.825048 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a)
2026-04-18 00:45:13.825055 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a)
2026-04-18 00:45:13.825062 | orchestrator |
2026-04-18 00:45:13.825068 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.825075 | orchestrator | Saturday 18 April 2026 00:45:12 +0000 (0:00:00.371) 0:00:46.973 ********
2026-04-18 00:45:13.825082 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389)
2026-04-18 00:45:13.825089 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389)
2026-04-18 00:45:13.825095 | orchestrator |
2026-04-18 00:45:13.825102 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.825108 | orchestrator | Saturday 18 April 2026 00:45:12 +0000 (0:00:00.400) 0:00:47.374 ********
2026-04-18 00:45:13.825115 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231)
2026-04-18 00:45:13.825121 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231)
2026-04-18 00:45:13.825128 | orchestrator |
2026-04-18 00:45:13.825134 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-18 00:45:13.825141 | orchestrator | Saturday 18 April 2026 00:45:13 +0000 (0:00:00.426) 0:00:47.801 ********
2026-04-18 00:45:13.825147 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-18 00:45:13.825154 | orchestrator |
2026-04-18 00:45:13.825160 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:13.825167 | orchestrator | Saturday 18 April 2026 00:45:13 +0000 (0:00:00.324) 0:00:48.125 ********
2026-04-18 00:45:13.825173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-18 00:45:13.825180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-18 00:45:13.825186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-18 00:45:13.825193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-18 00:45:13.825199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-18 00:45:13.825205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-18 00:45:13.825211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-18 00:45:13.825218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-18 00:45:13.825223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-18 00:45:13.825234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-18 00:45:13.825241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-18 00:45:13.825256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-18 00:45:22.179598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-18 00:45:22.179690 | orchestrator |
2026-04-18 00:45:22.179701 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.179709 | orchestrator | Saturday 18 April 2026 00:45:13 +0000 (0:00:00.419) 0:00:48.545 ********
2026-04-18 00:45:22.179715 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.179723 | orchestrator |
2026-04-18 00:45:22.179729 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.179735 | orchestrator | Saturday 18 April 2026 00:45:14 +0000 (0:00:00.179) 0:00:48.724 ********
2026-04-18 00:45:22.179741 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.179747 | orchestrator |
2026-04-18 00:45:22.179754 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.179761 | orchestrator | Saturday 18 April 2026 00:45:14 +0000 (0:00:00.169) 0:00:48.894 ********
2026-04-18 00:45:22.179767 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.179773 | orchestrator |
2026-04-18 00:45:22.179780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.179787 | orchestrator | Saturday 18 April 2026 00:45:14 +0000 (0:00:00.561) 0:00:49.455 ********
2026-04-18 00:45:22.179791 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.179795 | orchestrator |
2026-04-18 00:45:22.179799 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.179803 | orchestrator | Saturday 18 April 2026 00:45:15 +0000 (0:00:00.175) 0:00:49.631 ********
2026-04-18 00:45:22.179807 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.179811 | orchestrator |
2026-04-18 00:45:22.179815 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.179819 | orchestrator | Saturday 18 April 2026 00:45:15 +0000 (0:00:00.183) 0:00:49.815 ********
2026-04-18 00:45:22.179823 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.179827 | orchestrator |
2026-04-18 00:45:22.179831 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.179835 | orchestrator | Saturday 18 April 2026 00:45:15 +0000 (0:00:00.182) 0:00:49.997 ********
2026-04-18 00:45:22.179838 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.179842 | orchestrator |
2026-04-18 00:45:22.179846 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.179865 | orchestrator | Saturday 18 April 2026 00:45:15 +0000 (0:00:00.169) 0:00:50.167 ********
2026-04-18 00:45:22.179869 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.179873 | orchestrator |
2026-04-18 00:45:22.179876 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.179880 | orchestrator | Saturday 18 April 2026 00:45:15 +0000 (0:00:00.186) 0:00:50.354 ********
2026-04-18 00:45:22.179885 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-18 00:45:22.179889 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-18 00:45:22.179986 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-18 00:45:22.179992 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-18 00:45:22.179996 | orchestrator |
2026-04-18 00:45:22.180002 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.180009 | orchestrator | Saturday 18 April 2026 00:45:16 +0000 (0:00:00.593) 0:00:50.948 ********
2026-04-18 00:45:22.180014 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180020 | orchestrator |
2026-04-18 00:45:22.180025 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.180031 | orchestrator | Saturday 18 April 2026 00:45:16 +0000 (0:00:00.231) 0:00:51.179 ********
2026-04-18 00:45:22.180056 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180062 | orchestrator |
2026-04-18 00:45:22.180067 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.180073 | orchestrator | Saturday 18 April 2026 00:45:16 +0000 (0:00:00.177) 0:00:51.357 ********
2026-04-18 00:45:22.180079 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180085 | orchestrator |
2026-04-18 00:45:22.180091 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-18 00:45:22.180098 | orchestrator | Saturday 18 April 2026 00:45:16 +0000 (0:00:00.154) 0:00:51.511 ********
2026-04-18 00:45:22.180103 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180110 | orchestrator |
2026-04-18 00:45:22.180117 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-18 00:45:22.180123 | orchestrator | Saturday 18 April 2026 00:45:17 +0000 (0:00:00.160) 0:00:51.672 ********
2026-04-18 00:45:22.180129 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180135 | orchestrator |
2026-04-18 00:45:22.180141 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-18 00:45:22.180148 | orchestrator | Saturday 18 April 2026 00:45:17 +0000 (0:00:00.098) 0:00:51.770 ********
2026-04-18 00:45:22.180157 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fe91ca0a-93bc-5e10-8732-62b62acecb68'}})
2026-04-18 00:45:22.180162 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a409408a-9332-5b4b-a953-28c1be45fb12'}})
2026-04-18 00:45:22.180166 | orchestrator |
2026-04-18 00:45:22.180171 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-18 00:45:22.180175 | orchestrator | Saturday 18 April 2026 00:45:17 +0000 (0:00:00.271) 0:00:52.041 ********
2026-04-18 00:45:22.180182 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:22.180187 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:22.180192 | orchestrator |
2026-04-18 00:45:22.180196 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-18 00:45:22.180214 | orchestrator | Saturday 18 April 2026 00:45:19 +0000 (0:00:01.947) 0:00:53.988 ********
2026-04-18 00:45:22.180219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:22.180224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:22.180229 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180233 | orchestrator |
2026-04-18 00:45:22.180237 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-18 00:45:22.180241 | orchestrator | Saturday 18 April 2026 00:45:19 +0000 (0:00:00.129) 0:00:54.118 ********
2026-04-18 00:45:22.180246 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:22.180265 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:22.180270 | orchestrator |
2026-04-18 00:45:22.180274 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-18 00:45:22.180279 | orchestrator | Saturday 18 April 2026 00:45:21 +0000 (0:00:01.592) 0:00:55.710 ********
2026-04-18 00:45:22.180283 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:22.180287 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:22.180296 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180300 | orchestrator |
2026-04-18 00:45:22.180304 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-18 00:45:22.180309 | orchestrator | Saturday 18 April 2026 00:45:21 +0000 (0:00:00.126) 0:00:55.851 ********
2026-04-18 00:45:22.180313 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180317 | orchestrator |
2026-04-18 00:45:22.180322 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-18 00:45:22.180326 | orchestrator | Saturday 18 April 2026 00:45:21 +0000 (0:00:00.126) 0:00:55.977 ********
2026-04-18 00:45:22.180330 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:22.180334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:22.180339 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180343 | orchestrator |
2026-04-18 00:45:22.180348 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-18 00:45:22.180352 | orchestrator | Saturday 18 April 2026 00:45:21 +0000 (0:00:00.139) 0:00:56.117 ********
2026-04-18 00:45:22.180356 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180360 | orchestrator |
2026-04-18 00:45:22.180364 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-18 00:45:22.180369 | orchestrator | Saturday 18 April 2026 00:45:21 +0000 (0:00:00.122) 0:00:56.239 ********
2026-04-18 00:45:22.180373 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:22.180378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:22.180382 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180386 | orchestrator |
2026-04-18 00:45:22.180390 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-18 00:45:22.180395 | orchestrator | Saturday 18 April 2026 00:45:21 +0000 (0:00:00.129) 0:00:56.368 ********
2026-04-18 00:45:22.180399 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180403 | orchestrator |
2026-04-18 00:45:22.180407 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-18 00:45:22.180412 | orchestrator | Saturday 18 April 2026 00:45:21 +0000 (0:00:00.106) 0:00:56.474 ********
2026-04-18 00:45:22.180416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:22.180420 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:22.180425 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:22.180429 | orchestrator |
2026-04-18 00:45:22.180433 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-18 00:45:22.180438 | orchestrator | Saturday 18 April 2026 00:45:21 +0000 (0:00:00.134) 0:00:56.609 ********
2026-04-18 00:45:22.180442 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:45:22.180446 | orchestrator |
2026-04-18 00:45:22.180451 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-18 00:45:22.180455 | orchestrator | Saturday 18 April 2026 00:45:22 +0000 (0:00:00.126) 0:00:56.735 ********
2026-04-18 00:45:22.180462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:27.731946 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:27.732029 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732036 | orchestrator |
2026-04-18 00:45:27.732041 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-18 00:45:27.732047 | orchestrator | Saturday 18 April 2026 00:45:22 +0000 (0:00:00.270) 0:00:57.006 ********
2026-04-18 00:45:27.732051 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:27.732055 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:27.732059 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732063 | orchestrator |
2026-04-18 00:45:27.732076 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-18 00:45:27.732080 | orchestrator | Saturday 18 April 2026 00:45:22 +0000 (0:00:00.143) 0:00:57.149 ********
2026-04-18 00:45:27.732084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})
2026-04-18 00:45:27.732088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})
2026-04-18 00:45:27.732091 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732095 | orchestrator |
2026-04-18 00:45:27.732099 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-18 00:45:27.732103 | orchestrator | Saturday 18 April 2026 00:45:22 +0000 (0:00:00.143) 0:00:57.293 ********
2026-04-18 00:45:27.732106 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732110 | orchestrator |
2026-04-18 00:45:27.732114 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-18 00:45:27.732117 | orchestrator | Saturday 18 April 2026 00:45:22 +0000 (0:00:00.122) 0:00:57.415 ********
2026-04-18 00:45:27.732121 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732125 | orchestrator |
2026-04-18 00:45:27.732128 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-18 00:45:27.732132 | orchestrator | Saturday 18 April 2026 00:45:22 +0000 (0:00:00.123) 0:00:57.539 ********
2026-04-18 00:45:27.732136 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732140 | orchestrator |
2026-04-18 00:45:27.732144 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-18 00:45:27.732148 | orchestrator | Saturday 18 April 2026 00:45:23 +0000 (0:00:00.123) 0:00:57.662 ********
2026-04-18 00:45:27.732152 | orchestrator | ok: [testbed-node-5] => {
2026-04-18 00:45:27.732156 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-18 00:45:27.732160 | orchestrator | }
2026-04-18 00:45:27.732164 | orchestrator |
2026-04-18 00:45:27.732167 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-18 00:45:27.732171 | orchestrator | Saturday 18 April 2026 00:45:23 +0000 (0:00:00.134) 0:00:57.797 ********
2026-04-18 00:45:27.732175 | orchestrator | ok: [testbed-node-5] => {
2026-04-18 00:45:27.732179 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-18 00:45:27.732182 | orchestrator | }
2026-04-18 00:45:27.732186 | orchestrator |
2026-04-18 00:45:27.732190 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-18 00:45:27.732193 | orchestrator | Saturday 18 April 2026 00:45:23 +0000 (0:00:00.146) 0:00:57.943 ********
2026-04-18 00:45:27.732197 | orchestrator | ok: [testbed-node-5] => {
2026-04-18 00:45:27.732201 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-18 00:45:27.732205 | orchestrator | }
2026-04-18 00:45:27.732209 | orchestrator |
2026-04-18 00:45:27.732212 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-18 00:45:27.732216 | orchestrator | Saturday 18 April 2026 00:45:23 +0000 (0:00:00.118) 0:00:58.062 ********
2026-04-18 00:45:27.732224 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:45:27.732228 | orchestrator |
2026-04-18 00:45:27.732232 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-18 00:45:27.732235 | orchestrator | Saturday 18 April 2026 00:45:23 +0000 (0:00:00.510) 0:00:58.572 ********
2026-04-18 00:45:27.732239 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:45:27.732243 | orchestrator |
2026-04-18 00:45:27.732246 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-18 00:45:27.732250 | orchestrator | Saturday 18 April 2026 00:45:24 +0000 (0:00:00.467) 0:00:59.039 ********
2026-04-18 00:45:27.732254 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:45:27.732258 | orchestrator |
2026-04-18 00:45:27.732262 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-18 00:45:27.732265 | orchestrator | Saturday 18 April 2026 00:45:24 +0000 (0:00:00.489) 0:00:59.529 ********
2026-04-18 00:45:27.732269 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:45:27.732273 | orchestrator |
2026-04-18 00:45:27.732276 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-18 00:45:27.732280 | orchestrator | Saturday 18 April 2026 00:45:25 +0000 (0:00:00.287) 0:00:59.816 ********
2026-04-18 00:45:27.732284 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732287 | orchestrator |
2026-04-18 00:45:27.732291 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-18 00:45:27.732295 | orchestrator | Saturday 18 April 2026 00:45:25 +0000 (0:00:00.097) 0:00:59.913 ********
2026-04-18 00:45:27.732299 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732302 | orchestrator |
2026-04-18 00:45:27.732306 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-18 00:45:27.732310 | orchestrator | Saturday 18 April 2026 00:45:25 +0000 (0:00:00.109) 0:01:00.023 ********
2026-04-18 00:45:27.732313 | orchestrator | ok: [testbed-node-5] => {
2026-04-18 00:45:27.732317 | orchestrator |  "vgs_report": {
2026-04-18 00:45:27.732321 | orchestrator |  "vg": []
2026-04-18 00:45:27.732335 | orchestrator |  }
2026-04-18 00:45:27.732339 | orchestrator | }
2026-04-18 00:45:27.732343 | orchestrator |
2026-04-18 00:45:27.732347 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-18 00:45:27.732351 | orchestrator | Saturday 18 April 2026 00:45:25 +0000 (0:00:00.137) 0:01:00.161 ********
2026-04-18 00:45:27.732355 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732358 | orchestrator |
2026-04-18 00:45:27.732362 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-18 00:45:27.732366 | orchestrator | Saturday 18 April 2026 00:45:25 +0000 (0:00:00.121) 0:01:00.283 ********
2026-04-18 00:45:27.732370 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732373 | orchestrator |
2026-04-18 00:45:27.732377 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-18 00:45:27.732381 | orchestrator | Saturday 18 April 2026 00:45:25 +0000 (0:00:00.116) 0:01:00.399 ********
2026-04-18 00:45:27.732384 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732388 | orchestrator |
2026-04-18 00:45:27.732392 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-18 00:45:27.732395 | orchestrator | Saturday 18 April 2026 00:45:25 +0000 (0:00:00.123) 0:01:00.523 ********
2026-04-18 00:45:27.732402 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732405 | orchestrator |
2026-04-18 00:45:27.732409 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-18 00:45:27.732413 | orchestrator | Saturday 18 April 2026 00:45:26 +0000 (0:00:00.117) 0:01:00.640 ********
2026-04-18 00:45:27.732417 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732420 | orchestrator |
2026-04-18 00:45:27.732424 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-18 00:45:27.732428 | orchestrator | Saturday 18 April 2026 00:45:26 +0000 (0:00:00.127) 0:01:00.767 ********
2026-04-18 00:45:27.732431 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732435 | orchestrator |
2026-04-18 00:45:27.732442 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-18 00:45:27.732446 | orchestrator | Saturday 18 April 2026 00:45:26 +0000 (0:00:00.123) 0:01:00.891 ********
2026-04-18 00:45:27.732450 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732453 | orchestrator |
2026-04-18 00:45:27.732457 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-18 00:45:27.732461 | orchestrator | Saturday 18 April 2026 00:45:26 +0000 (0:00:00.125) 0:01:01.016 ********
2026-04-18 00:45:27.732464 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732468 | orchestrator |
2026-04-18 00:45:27.732473 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-18 00:45:27.732477 | orchestrator | Saturday 18 April 2026 00:45:26 +0000 (0:00:00.116) 0:01:01.133 ********
2026-04-18 00:45:27.732482 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732486 | orchestrator |
2026-04-18 00:45:27.732490 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-18 00:45:27.732495 | orchestrator | Saturday 18 April 2026 00:45:26 +0000 (0:00:00.302) 0:01:01.436 ********
2026-04-18 00:45:27.732499 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732503 | orchestrator |
2026-04-18 00:45:27.732508 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-18 00:45:27.732512 | orchestrator | Saturday 18 April 2026 00:45:26 +0000 (0:00:00.107) 0:01:01.543 ********
2026-04-18 00:45:27.732516 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732520 | orchestrator |
2026-04-18 00:45:27.732524 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-18 00:45:27.732529 | orchestrator | Saturday 18 April 2026 00:45:27 +0000 (0:00:00.136) 0:01:01.680 ********
2026-04-18 00:45:27.732533 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732537 | orchestrator |
2026-04-18 00:45:27.732542 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-18 00:45:27.732546 | orchestrator | Saturday 18 April 2026 00:45:27 +0000 (0:00:00.121) 0:01:01.801 ********
2026-04-18 00:45:27.732550 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732554 | orchestrator |
2026-04-18 00:45:27.732558 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-18 00:45:27.732563 | orchestrator | Saturday 18 April 2026 00:45:27 +0000 (0:00:00.123) 0:01:01.925 ********
2026-04-18 00:45:27.732567 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:27.732571 | orchestrator |
2026-04-18 00:45:27.732575 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-18 00:45:27.732580 |
orchestrator | Saturday 18 April 2026 00:45:27 +0000 (0:00:00.114) 0:01:02.039 ******** 2026-04-18 00:45:27.732585 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:27.732591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:27.732598 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:27.732604 | orchestrator | 2026-04-18 00:45:27.732610 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-18 00:45:27.732617 | orchestrator | Saturday 18 April 2026 00:45:27 +0000 (0:00:00.140) 0:01:02.180 ******** 2026-04-18 00:45:27.732623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:27.732630 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:27.732636 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:27.732642 | orchestrator | 2026-04-18 00:45:27.732649 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-18 00:45:27.732655 | orchestrator | Saturday 18 April 2026 00:45:27 +0000 (0:00:00.125) 0:01:02.306 ******** 2026-04-18 00:45:27.732672 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:30.676759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 
00:45:30.676834 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:30.676843 | orchestrator | 2026-04-18 00:45:30.676851 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-18 00:45:30.676858 | orchestrator | Saturday 18 April 2026 00:45:27 +0000 (0:00:00.123) 0:01:02.429 ******** 2026-04-18 00:45:30.676865 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:30.676872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:30.676878 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:30.676884 | orchestrator | 2026-04-18 00:45:30.676890 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-18 00:45:30.676946 | orchestrator | Saturday 18 April 2026 00:45:27 +0000 (0:00:00.131) 0:01:02.561 ******** 2026-04-18 00:45:30.676953 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:30.676960 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:30.676966 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:30.676972 | orchestrator | 2026-04-18 00:45:30.676978 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-18 00:45:30.676986 | orchestrator | Saturday 18 April 2026 00:45:28 +0000 (0:00:00.144) 0:01:02.706 ******** 2026-04-18 00:45:30.676996 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 
'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:30.677007 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:30.677035 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:30.677050 | orchestrator | 2026-04-18 00:45:30.677060 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-18 00:45:30.677070 | orchestrator | Saturday 18 April 2026 00:45:28 +0000 (0:00:00.145) 0:01:02.852 ******** 2026-04-18 00:45:30.677080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:30.677090 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:30.677101 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:30.677112 | orchestrator | 2026-04-18 00:45:30.677122 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-18 00:45:30.677133 | orchestrator | Saturday 18 April 2026 00:45:28 +0000 (0:00:00.286) 0:01:03.138 ******** 2026-04-18 00:45:30.677144 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:30.677154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:30.677164 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:30.677175 | orchestrator | 2026-04-18 00:45:30.677185 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-18 
00:45:30.677216 | orchestrator | Saturday 18 April 2026 00:45:28 +0000 (0:00:00.139) 0:01:03.277 ******** 2026-04-18 00:45:30.677226 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:45:30.677237 | orchestrator | 2026-04-18 00:45:30.677247 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-18 00:45:30.677258 | orchestrator | Saturday 18 April 2026 00:45:29 +0000 (0:00:00.594) 0:01:03.872 ******** 2026-04-18 00:45:30.677268 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:45:30.677279 | orchestrator | 2026-04-18 00:45:30.677290 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-18 00:45:30.677300 | orchestrator | Saturday 18 April 2026 00:45:29 +0000 (0:00:00.572) 0:01:04.444 ******** 2026-04-18 00:45:30.677310 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:45:30.677320 | orchestrator | 2026-04-18 00:45:30.677330 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-18 00:45:30.677341 | orchestrator | Saturday 18 April 2026 00:45:29 +0000 (0:00:00.137) 0:01:04.582 ******** 2026-04-18 00:45:30.677352 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'vg_name': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'}) 2026-04-18 00:45:30.677363 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'vg_name': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'}) 2026-04-18 00:45:30.677374 | orchestrator | 2026-04-18 00:45:30.677385 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-18 00:45:30.677395 | orchestrator | Saturday 18 April 2026 00:45:30 +0000 (0:00:00.171) 0:01:04.754 ******** 2026-04-18 00:45:30.677420 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 
'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:30.677431 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:30.677441 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:30.677452 | orchestrator | 2026-04-18 00:45:30.677462 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-18 00:45:30.677473 | orchestrator | Saturday 18 April 2026 00:45:30 +0000 (0:00:00.159) 0:01:04.913 ******** 2026-04-18 00:45:30.677488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:30.677499 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:30.677509 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:30.677520 | orchestrator | 2026-04-18 00:45:30.677530 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-18 00:45:30.677541 | orchestrator | Saturday 18 April 2026 00:45:30 +0000 (0:00:00.122) 0:01:05.036 ******** 2026-04-18 00:45:30.677551 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'})  2026-04-18 00:45:30.677562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'})  2026-04-18 00:45:30.677572 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:45:30.677583 | orchestrator | 2026-04-18 00:45:30.677593 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-18 
00:45:30.677603 | orchestrator | Saturday 18 April 2026 00:45:30 +0000 (0:00:00.132) 0:01:05.169 ******** 2026-04-18 00:45:30.677613 | orchestrator | ok: [testbed-node-5] => { 2026-04-18 00:45:30.677624 | orchestrator |  "lvm_report": { 2026-04-18 00:45:30.677636 | orchestrator |  "lv": [ 2026-04-18 00:45:30.677646 | orchestrator |  { 2026-04-18 00:45:30.677664 | orchestrator |  "lv_name": "osd-block-a409408a-9332-5b4b-a953-28c1be45fb12", 2026-04-18 00:45:30.677675 | orchestrator |  "vg_name": "ceph-a409408a-9332-5b4b-a953-28c1be45fb12" 2026-04-18 00:45:30.677685 | orchestrator |  }, 2026-04-18 00:45:30.677696 | orchestrator |  { 2026-04-18 00:45:30.677706 | orchestrator |  "lv_name": "osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68", 2026-04-18 00:45:30.677717 | orchestrator |  "vg_name": "ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68" 2026-04-18 00:45:30.677727 | orchestrator |  } 2026-04-18 00:45:30.677738 | orchestrator |  ], 2026-04-18 00:45:30.677748 | orchestrator |  "pv": [ 2026-04-18 00:45:30.677758 | orchestrator |  { 2026-04-18 00:45:30.677768 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-18 00:45:30.677779 | orchestrator |  "vg_name": "ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68" 2026-04-18 00:45:30.677789 | orchestrator |  }, 2026-04-18 00:45:30.677799 | orchestrator |  { 2026-04-18 00:45:30.677810 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-18 00:45:30.677820 | orchestrator |  "vg_name": "ceph-a409408a-9332-5b4b-a953-28c1be45fb12" 2026-04-18 00:45:30.677831 | orchestrator |  } 2026-04-18 00:45:30.677841 | orchestrator |  ] 2026-04-18 00:45:30.677851 | orchestrator |  } 2026-04-18 00:45:30.677862 | orchestrator | } 2026-04-18 00:45:30.677872 | orchestrator | 2026-04-18 00:45:30.677882 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:45:30.677893 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-18 00:45:30.677923 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-18 00:45:30.677933 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-18 00:45:30.677943 | orchestrator | 2026-04-18 00:45:30.677954 | orchestrator | 2026-04-18 00:45:30.677964 | orchestrator | 2026-04-18 00:45:30.677975 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:45:30.677985 | orchestrator | Saturday 18 April 2026 00:45:30 +0000 (0:00:00.116) 0:01:05.286 ******** 2026-04-18 00:45:30.677996 | orchestrator | =============================================================================== 2026-04-18 00:45:30.678006 | orchestrator | Create block VGs -------------------------------------------------------- 5.70s 2026-04-18 00:45:30.678068 | orchestrator | Create block LVs -------------------------------------------------------- 4.35s 2026-04-18 00:45:30.678082 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s 2026-04-18 00:45:30.678093 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.68s 2026-04-18 00:45:30.678103 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.64s 2026-04-18 00:45:30.678114 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s 2026-04-18 00:45:30.678125 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s 2026-04-18 00:45:30.678136 | orchestrator | Add known partitions to the list of available block devices ------------- 1.34s 2026-04-18 00:45:30.678155 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s 2026-04-18 00:45:30.971110 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s 2026-04-18 
00:45:30.971206 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-04-18 00:45:30.971221 | orchestrator | Print LVM report data --------------------------------------------------- 0.84s 2026-04-18 00:45:30.971233 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-04-18 00:45:30.971244 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.66s 2026-04-18 00:45:30.971280 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.62s 2026-04-18 00:45:30.971291 | orchestrator | Get initial list of available block devices ----------------------------- 0.61s 2026-04-18 00:45:30.971304 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.60s 2026-04-18 00:45:30.971347 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2026-04-18 00:45:30.971370 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.59s 2026-04-18 00:45:30.971381 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s 2026-04-18 00:45:42.391208 | orchestrator | 2026-04-18 00:45:42 | INFO  | Prepare task for execution of facts. 2026-04-18 00:45:42.465380 | orchestrator | 2026-04-18 00:45:42 | INFO  | Task a5d91eac-8ad7-4bfa-9404-d899b7577521 (facts) was prepared for execution. 2026-04-18 00:45:42.465463 | orchestrator | 2026-04-18 00:45:42 | INFO  | It takes a moment until task a5d91eac-8ad7-4bfa-9404-d899b7577521 (facts) has been started and output is visible here. 
2026-04-18 00:45:53.567892 | orchestrator |
2026-04-18 00:45:53.567960 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-18 00:45:53.567971 | orchestrator |
2026-04-18 00:45:53.567978 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-18 00:45:53.567986 | orchestrator | Saturday 18 April 2026 00:45:45 +0000 (0:00:00.299) 0:00:00.299 ********
2026-04-18 00:45:53.567993 | orchestrator | ok: [testbed-manager]
2026-04-18 00:45:53.568000 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:45:53.568007 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:45:53.568014 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:45:53.568020 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:45:53.568027 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:45:53.568034 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:45:53.568040 | orchestrator |
2026-04-18 00:45:53.568048 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-18 00:45:53.568055 | orchestrator | Saturday 18 April 2026 00:45:46 +0000 (0:00:01.336) 0:00:01.635 ********
2026-04-18 00:45:53.568062 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:45:53.568069 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:45:53.568076 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:45:53.568083 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:45:53.568090 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:45:53.568097 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:45:53.568104 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:53.568111 | orchestrator |
2026-04-18 00:45:53.568118 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-18 00:45:53.568124 | orchestrator |
2026-04-18 00:45:53.568131 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-18 00:45:53.568138 | orchestrator | Saturday 18 April 2026 00:45:47 +0000 (0:00:01.061) 0:00:02.697 ********
2026-04-18 00:45:53.568145 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:45:53.568152 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:45:53.568159 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:45:53.568166 | orchestrator | ok: [testbed-manager]
2026-04-18 00:45:53.568173 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:45:53.568179 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:45:53.568186 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:45:53.568192 | orchestrator |
2026-04-18 00:45:53.568199 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-18 00:45:53.568206 | orchestrator |
2026-04-18 00:45:53.568213 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-18 00:45:53.568220 | orchestrator | Saturday 18 April 2026 00:45:52 +0000 (0:00:04.940) 0:00:07.638 ********
2026-04-18 00:45:53.568227 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:45:53.568234 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:45:53.568241 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:45:53.568266 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:45:53.568273 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:45:53.568280 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:45:53.568287 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:45:53.568293 | orchestrator |
2026-04-18 00:45:53.568300 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:45:53.568307 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:45:53.568315 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:45:53.568322 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:45:53.568329 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:45:53.568336 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:45:53.568343 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:45:53.568349 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:45:53.568357 | orchestrator |
2026-04-18 00:45:53.568364 | orchestrator |
2026-04-18 00:45:53.568371 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:45:53.568378 | orchestrator | Saturday 18 April 2026 00:45:53 +0000 (0:00:00.483) 0:00:08.121 ********
2026-04-18 00:45:53.568386 | orchestrator | ===============================================================================
2026-04-18 00:45:53.568393 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s
2026-04-18 00:45:53.568400 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.34s
2026-04-18 00:45:53.568415 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s
2026-04-18 00:45:53.568422 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s
2026-04-18 00:46:05.104060 | orchestrator | 2026-04-18 00:46:05 | INFO  | Prepare task for execution of frr.
2026-04-18 00:46:05.181287 | orchestrator | 2026-04-18 00:46:05 | INFO  | Task 18a77ab8-8a7f-42f8-9cdd-57d139c86ee2 (frr) was prepared for execution.
2026-04-18 00:46:05.181327 | orchestrator | 2026-04-18 00:46:05 | INFO  | It takes a moment until task 18a77ab8-8a7f-42f8-9cdd-57d139c86ee2 (frr) has been started and output is visible here.
2026-04-18 00:46:29.966744 | orchestrator |
2026-04-18 00:46:29.966823 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-18 00:46:29.966833 | orchestrator |
2026-04-18 00:46:29.966840 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-18 00:46:29.966847 | orchestrator | Saturday 18 April 2026 00:46:08 +0000 (0:00:00.301) 0:00:00.301 ********
2026-04-18 00:46:29.966854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-18 00:46:29.966861 | orchestrator |
2026-04-18 00:46:29.966867 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-18 00:46:29.966874 | orchestrator | Saturday 18 April 2026 00:46:08 +0000 (0:00:00.233) 0:00:00.534 ********
2026-04-18 00:46:29.966880 | orchestrator | changed: [testbed-manager]
2026-04-18 00:46:29.966887 | orchestrator |
2026-04-18 00:46:29.966893 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-18 00:46:29.966899 | orchestrator | Saturday 18 April 2026 00:46:10 +0000 (0:00:01.523) 0:00:02.058 ********
2026-04-18 00:46:29.966967 | orchestrator | changed: [testbed-manager]
2026-04-18 00:46:29.966975 | orchestrator |
2026-04-18 00:46:29.966981 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-18 00:46:29.966988 | orchestrator | Saturday 18 April 2026 00:46:19 +0000 (0:00:09.686) 0:00:11.744 ********
2026-04-18 00:46:29.966994 | orchestrator | ok: [testbed-manager]
2026-04-18 00:46:29.967001 | orchestrator |
2026-04-18 00:46:29.967008 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-18 00:46:29.967015 | orchestrator | Saturday 18 April 2026 00:46:20 +0000 (0:00:01.016) 0:00:12.760 ********
2026-04-18 00:46:29.967021 | orchestrator | changed: [testbed-manager]
2026-04-18 00:46:29.967027 | orchestrator |
2026-04-18 00:46:29.967033 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-18 00:46:29.967040 | orchestrator | Saturday 18 April 2026 00:46:21 +0000 (0:00:00.919) 0:00:13.680 ********
2026-04-18 00:46:29.967046 | orchestrator | ok: [testbed-manager]
2026-04-18 00:46:29.967052 | orchestrator |
2026-04-18 00:46:29.967059 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-18 00:46:29.967065 | orchestrator | Saturday 18 April 2026 00:46:23 +0000 (0:00:01.195) 0:00:14.875 ********
2026-04-18 00:46:29.967072 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:46:29.967078 | orchestrator |
2026-04-18 00:46:29.967084 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-18 00:46:29.967091 | orchestrator | Saturday 18 April 2026 00:46:23 +0000 (0:00:00.152) 0:00:15.027 ********
2026-04-18 00:46:29.967097 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:46:29.967103 | orchestrator |
2026-04-18 00:46:29.967109 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-18 00:46:29.967116 | orchestrator | Saturday 18 April 2026 00:46:23 +0000 (0:00:00.291) 0:00:15.319 ********
2026-04-18 00:46:29.967122 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:46:29.967128 | orchestrator |
2026-04-18 00:46:29.967134 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-18 00:46:29.967141 | orchestrator | Saturday 18 April 2026 00:46:23 +0000 (0:00:00.153) 0:00:15.473 ********
2026-04-18 00:46:29.967148 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:46:29.967154 | orchestrator |
2026-04-18 00:46:29.967161 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-18 00:46:29.967167 | orchestrator | Saturday 18 April 2026 00:46:23 +0000 (0:00:00.127) 0:00:15.600 ********
2026-04-18 00:46:29.967173 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:46:29.967180 | orchestrator |
2026-04-18 00:46:29.967186 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-18 00:46:29.967192 | orchestrator | Saturday 18 April 2026 00:46:23 +0000 (0:00:00.153) 0:00:15.753 ********
2026-04-18 00:46:29.967198 | orchestrator | changed: [testbed-manager]
2026-04-18 00:46:29.967204 | orchestrator |
2026-04-18 00:46:29.967211 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-18 00:46:29.967217 | orchestrator | Saturday 18 April 2026 00:46:24 +0000 (0:00:00.942) 0:00:16.696 ********
2026-04-18 00:46:29.967224 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-18 00:46:29.967230 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-18 00:46:29.967238 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-18 00:46:29.967244 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-18 00:46:29.967250 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-18 00:46:29.967257 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-18 00:46:29.967263 | orchestrator |
2026-04-18 00:46:29.967269 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-18 00:46:29.967280 | orchestrator | Saturday 18 April 2026 00:46:27 +0000 (0:00:02.200) 0:00:18.896 ********
2026-04-18 00:46:29.967287 | orchestrator | ok: [testbed-manager]
2026-04-18 00:46:29.967293 | orchestrator |
2026-04-18 00:46:29.967300 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-18 00:46:29.967306 | orchestrator | Saturday 18 April 2026 00:46:28 +0000 (0:00:01.191) 0:00:20.088 ********
2026-04-18 00:46:29.967313 | orchestrator | changed: [testbed-manager]
2026-04-18 00:46:29.967320 | orchestrator |
2026-04-18 00:46:29.967326 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:46:29.967334 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-18 00:46:29.967340 | orchestrator |
2026-04-18 00:46:29.967347 | orchestrator |
2026-04-18 00:46:29.967368 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:46:29.967375 | orchestrator | Saturday 18 April 2026 00:46:29 +0000 (0:00:01.353) 0:00:21.442 ********
2026-04-18 00:46:29.967381 | orchestrator | ===============================================================================
2026-04-18 00:46:29.967388 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.69s
2026-04-18 00:46:29.967394 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.20s
2026-04-18 00:46:29.967401 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.52s
2026-04-18 00:46:29.967407 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.35s
2026-04-18 00:46:29.967414 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.20s
2026-04-18 00:46:29.967420 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.19s
2026-04-18 00:46:29.967427 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.02s
2026-04-18 00:46:29.967433 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.94s
2026-04-18 00:46:29.967439 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.92s
2026-04-18 00:46:29.967446 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.29s
2026-04-18 00:46:29.967453 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2026-04-18 00:46:29.967459 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s
2026-04-18 00:46:29.967466 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s
2026-04-18 00:46:29.967472 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s
2026-04-18 00:46:29.967479 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s
2026-04-18 00:46:30.145993 | orchestrator |
2026-04-18 00:46:30.147840 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Apr 18 00:46:30 UTC 2026
2026-04-18 00:46:30.147899 | orchestrator |
2026-04-18 00:46:31.333160 | orchestrator | 2026-04-18 00:46:31 | INFO  | Collection nutshell is prepared for execution
2026-04-18 00:46:31.448447 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [0] - dotfiles
2026-04-18 00:46:41.485462 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [0] - homer
2026-04-18 00:46:41.485537 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [0] - netdata
2026-04-18 00:46:41.485544 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [0] - openstackclient
2026-04-18 00:46:41.485549 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [0] - phpmyadmin
2026-04-18 00:46:41.485687 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [0] - common
2026-04-18 00:46:41.489655 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [1] -- loadbalancer
2026-04-18 00:46:41.489741 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [2] --- opensearch
2026-04-18 00:46:41.490248 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [2] --- mariadb-ng
2026-04-18 00:46:41.490478 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [3] ---- horizon
2026-04-18 00:46:41.490576 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [3] ---- keystone
2026-04-18 00:46:41.490850 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [4] ----- neutron
2026-04-18 00:46:41.491194 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [5] ------ wait-for-nova
2026-04-18 00:46:41.491644 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [6] ------- octavia
2026-04-18 00:46:41.492973 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [4] ----- barbican
2026-04-18 00:46:41.493080 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [4] ----- designate
2026-04-18 00:46:41.493090 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [4] ----- ironic
2026-04-18 00:46:41.493469 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [4] ----- placement
2026-04-18 00:46:41.493490 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [4] ----- magnum
2026-04-18 00:46:41.495429 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [1] -- openvswitch
2026-04-18 00:46:41.495476 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [2] --- ovn
2026-04-18 00:46:41.495555 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [1] -- memcached
2026-04-18 00:46:41.495816 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [1] -- redis
2026-04-18 00:46:41.495830 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [1] -- rabbitmq-ng
2026-04-18 00:46:41.496251 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [0] - kubernetes
2026-04-18 00:46:41.498914 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [1] --
kubeconfig 2026-04-18 00:46:41.498975 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [1] -- copy-kubeconfig 2026-04-18 00:46:41.498991 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [0] - ceph 2026-04-18 00:46:41.501480 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [1] -- ceph-pools 2026-04-18 00:46:41.501522 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [2] --- copy-ceph-keys 2026-04-18 00:46:41.501528 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [3] ---- cephclient 2026-04-18 00:46:41.501532 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-18 00:46:41.501643 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [4] ----- wait-for-keystone 2026-04-18 00:46:41.501814 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-18 00:46:41.502786 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [5] ------ glance 2026-04-18 00:46:41.502822 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [5] ------ cinder 2026-04-18 00:46:41.502827 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [5] ------ nova 2026-04-18 00:46:41.502832 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [4] ----- prometheus 2026-04-18 00:46:41.502836 | orchestrator | 2026-04-18 00:46:41 | INFO  | A [5] ------ grafana 2026-04-18 00:46:41.686815 | orchestrator | 2026-04-18 00:46:41 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-18 00:46:41.686882 | orchestrator | 2026-04-18 00:46:41 | INFO  | Tasks are running in the background 2026-04-18 00:46:43.323145 | orchestrator | 2026-04-18 00:46:43 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-18 00:46:45.504981 | orchestrator | 2026-04-18 00:46:45 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:46:45.505756 | orchestrator | 2026-04-18 00:46:45 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:46:45.506142 | orchestrator | 2026-04-18 00:46:45 | INFO 
 | Task 6ddac335-3afe-43bd-8c57-7e69eae48f97 is in state STARTED 2026-04-18 00:46:45.507261 | orchestrator | 2026-04-18 00:46:45 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:46:45.507979 | orchestrator | 2026-04-18 00:46:45 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:46:45.508627 | orchestrator | 2026-04-18 00:46:45 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:46:45.509444 | orchestrator | 2026-04-18 00:46:45 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:46:45.509481 | orchestrator | 2026-04-18 00:46:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:46:48.559497 | orchestrator | 2026-04-18 00:46:48 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:46:48.559886 | orchestrator | 2026-04-18 00:46:48 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:46:48.561820 | orchestrator | 2026-04-18 00:46:48 | INFO  | Task 6ddac335-3afe-43bd-8c57-7e69eae48f97 is in state STARTED 2026-04-18 00:46:48.562324 | orchestrator | 2026-04-18 00:46:48 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:46:48.563093 | orchestrator | 2026-04-18 00:46:48 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:46:48.564019 | orchestrator | 2026-04-18 00:46:48 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:46:48.564429 | orchestrator | 2026-04-18 00:46:48 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:46:48.567709 | orchestrator | 2026-04-18 00:46:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:46:51.604561 | orchestrator | 2026-04-18 00:46:51 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:46:51.605475 | orchestrator | 2026-04-18 00:46:51 | INFO  | Task 
97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:46:51.605981 | orchestrator | 2026-04-18 00:46:51 | INFO  | Task 6ddac335-3afe-43bd-8c57-7e69eae48f97 is in state STARTED 2026-04-18 00:46:51.606639 | orchestrator | 2026-04-18 00:46:51 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:46:51.607239 | orchestrator | 2026-04-18 00:46:51 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:46:51.607821 | orchestrator | 2026-04-18 00:46:51 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:46:51.610629 | orchestrator | 2026-04-18 00:46:51 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:46:51.610656 | orchestrator | 2026-04-18 00:46:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:46:54.767886 | orchestrator | 2026-04-18 00:46:54 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:46:54.768010 | orchestrator | 2026-04-18 00:46:54 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:46:54.768025 | orchestrator | 2026-04-18 00:46:54 | INFO  | Task 6ddac335-3afe-43bd-8c57-7e69eae48f97 is in state STARTED 2026-04-18 00:46:54.768032 | orchestrator | 2026-04-18 00:46:54 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:46:54.768040 | orchestrator | 2026-04-18 00:46:54 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:46:54.768046 | orchestrator | 2026-04-18 00:46:54 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:46:54.768053 | orchestrator | 2026-04-18 00:46:54 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:46:54.768079 | orchestrator | 2026-04-18 00:46:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:46:57.985251 | orchestrator | 2026-04-18 00:46:57 | INFO  | Task 
dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:46:57.985389 | orchestrator | 2026-04-18 00:46:57 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:46:57.985403 | orchestrator | 2026-04-18 00:46:57 | INFO  | Task 6ddac335-3afe-43bd-8c57-7e69eae48f97 is in state STARTED 2026-04-18 00:46:57.985411 | orchestrator | 2026-04-18 00:46:57 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:46:57.985418 | orchestrator | 2026-04-18 00:46:57 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:46:57.985425 | orchestrator | 2026-04-18 00:46:57 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:46:57.985431 | orchestrator | 2026-04-18 00:46:57 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:46:57.985438 | orchestrator | 2026-04-18 00:46:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:47:01.162267 | orchestrator | 2026-04-18 00:47:01 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:47:01.162348 | orchestrator | 2026-04-18 00:47:01 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:47:01.162358 | orchestrator | 2026-04-18 00:47:01 | INFO  | Task 6ddac335-3afe-43bd-8c57-7e69eae48f97 is in state STARTED 2026-04-18 00:47:01.162366 | orchestrator | 2026-04-18 00:47:01 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:47:01.162370 | orchestrator | 2026-04-18 00:47:01 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:47:01.162374 | orchestrator | 2026-04-18 00:47:01 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:47:01.162378 | orchestrator | 2026-04-18 00:47:01 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:47:01.162382 | orchestrator | 2026-04-18 
00:47:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:47:04.285459 | orchestrator | 2026-04-18 00:47:04 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:47:04.290635 | orchestrator | 2026-04-18 00:47:04 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:47:04.293225 | orchestrator | 2026-04-18 00:47:04 | INFO  | Task 6ddac335-3afe-43bd-8c57-7e69eae48f97 is in state SUCCESS 2026-04-18 00:47:04.294241 | orchestrator | 2026-04-18 00:47:04.294282 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-18 00:47:04.294291 | orchestrator | 2026-04-18 00:47:04.294295 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-04-18 00:47:04.294300 | orchestrator | Saturday 18 April 2026 00:46:49 +0000 (0:00:00.311) 0:00:00.311 ******** 2026-04-18 00:47:04.294304 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:47:04.294308 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:47:04.294312 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:47:04.294315 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:47:04.294319 | orchestrator | changed: [testbed-manager] 2026-04-18 00:47:04.294323 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:47:04.294326 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:47:04.294330 | orchestrator | 2026-04-18 00:47:04.294334 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-04-18 00:47:04.294337 | orchestrator | Saturday 18 April 2026 00:46:53 +0000 (0:00:03.883) 0:00:04.195 ******** 2026-04-18 00:47:04.294355 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-18 00:47:04.294360 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-18 00:47:04.294367 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-18 00:47:04.294371 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-18 00:47:04.294375 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-18 00:47:04.294378 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-18 00:47:04.294382 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-18 00:47:04.294386 | orchestrator | 2026-04-18 00:47:04.294389 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-04-18 00:47:04.294394 | orchestrator | Saturday 18 April 2026 00:46:55 +0000 (0:00:02.103) 0:00:06.298 ******** 2026-04-18 00:47:04.294399 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-18 00:46:54.344690', 'end': '2026-04-18 00:46:54.350998', 'delta': '0:00:00.006308', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-18 00:47:04.294404 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-18 00:46:54.342501', 'end': '2026-04-18 00:46:54.355226', 'delta': '0:00:00.012725', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-18 00:47:04.294409 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-18 00:46:54.365007', 'end': '2026-04-18 00:46:54.370146', 'delta': '0:00:00.005139', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-18 00:47:04.294421 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-18 00:46:54.463929', 'end': '2026-04-18 00:46:54.469425', 'delta': '0:00:00.005496', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-18 00:47:04.294448 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-18 00:46:54.452933', 'end': '2026-04-18 00:46:54.459853', 'delta': '0:00:00.006920', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-18 00:47:04.294453 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-18 00:46:55.225615', 'end': '2026-04-18 00:46:55.234698', 'delta': '0:00:00.009083', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-18 00:47:04.294457 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-18 00:46:55.603646', 'end': '2026-04-18 00:46:55.613159', 'delta': '0:00:00.009513', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-18 00:47:04.294461 | orchestrator | 2026-04-18 00:47:04.294464 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-04-18 00:47:04.294468 | orchestrator | Saturday 18 April 2026 00:46:57 +0000 (0:00:01.513) 0:00:07.811 ******** 2026-04-18 00:47:04.294472 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-18 00:47:04.294476 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-18 00:47:04.294480 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-18 00:47:04.294483 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-18 00:47:04.294487 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-18 00:47:04.294491 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-18 00:47:04.294495 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-18 00:47:04.294499 | orchestrator | 2026-04-18 00:47:04.294503 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-04-18 00:47:04.294506 | orchestrator | Saturday 18 April 2026 00:46:58 +0000 (0:00:01.284) 0:00:09.095 ******** 2026-04-18 00:47:04.294510 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-18 00:47:04.294514 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-18 00:47:04.294518 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-18 00:47:04.294521 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-18 00:47:04.294528 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-18 00:47:04.294532 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-18 00:47:04.294536 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-18 00:47:04.294539 | orchestrator | 2026-04-18 00:47:04.294543 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:47:04.294551 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:47:04.294556 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:47:04.294559 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:47:04.294563 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:47:04.294567 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:47:04.294571 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:47:04.294580 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:47:04.294588 | orchestrator | 2026-04-18 00:47:04.294592 | orchestrator | 2026-04-18 00:47:04.294596 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:47:04.294600 | orchestrator | Saturday 18 April 2026 00:47:00 +0000 (0:00:02.225) 0:00:11.321 ******** 2026-04-18 00:47:04.294604 | orchestrator | =============================================================================== 2026-04-18 00:47:04.294610 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.88s 2026-04-18 00:47:04.294620 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.23s 2026-04-18 00:47:04.294628 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.10s 2026-04-18 00:47:04.294634 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.51s 2026-04-18 00:47:04.294641 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. 
---- 1.28s 2026-04-18 00:47:04.302006 | orchestrator | 2026-04-18 00:47:04 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED 2026-04-18 00:47:04.304767 | orchestrator | 2026-04-18 00:47:04 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:47:04.317104 | orchestrator | 2026-04-18 00:47:04 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:47:04.322810 | orchestrator | 2026-04-18 00:47:04 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:47:04.329157 | orchestrator | 2026-04-18 00:47:04 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:47:04.329200 | orchestrator | 2026-04-18 00:47:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:47:07.415475 | orchestrator | 2026-04-18 00:47:07 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:47:07.415556 | orchestrator | 2026-04-18 00:47:07 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:47:07.415564 | orchestrator | 2026-04-18 00:47:07 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED 2026-04-18 00:47:07.415592 | orchestrator | 2026-04-18 00:47:07 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:47:07.415608 | orchestrator | 2026-04-18 00:47:07 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:47:07.415613 | orchestrator | 2026-04-18 00:47:07 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:47:07.415617 | orchestrator | 2026-04-18 00:47:07 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:47:07.415621 | orchestrator | 2026-04-18 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:47:10.465643 | orchestrator | 2026-04-18 00:47:10 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state 
STARTED 2026-04-18 00:47:10.465702 | orchestrator | 2026-04-18 00:47:10 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:47:10.466697 | orchestrator | 2026-04-18 00:47:10 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED 2026-04-18 00:47:10.466742 | orchestrator | 2026-04-18 00:47:10 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:47:10.466748 | orchestrator | 2026-04-18 00:47:10 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:47:10.466753 | orchestrator | 2026-04-18 00:47:10 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:47:10.466758 | orchestrator | 2026-04-18 00:47:10 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:47:10.466764 | orchestrator | 2026-04-18 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:47:13.501705 | orchestrator | 2026-04-18 00:47:13 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:47:13.501751 | orchestrator | 2026-04-18 00:47:13 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:47:13.501756 | orchestrator | 2026-04-18 00:47:13 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED 2026-04-18 00:47:13.501759 | orchestrator | 2026-04-18 00:47:13 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:47:13.501763 | orchestrator | 2026-04-18 00:47:13 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:47:13.501888 | orchestrator | 2026-04-18 00:47:13 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:47:13.502086 | orchestrator | 2026-04-18 00:47:13 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:47:13.502104 | orchestrator | 2026-04-18 00:47:13 | INFO  | Wait 1 second(s) until the next check 
2026-04-18 00:47:16.556778 | orchestrator | 2026-04-18 00:47:16 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:47:16.560956 | orchestrator | 2026-04-18 00:47:16 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:47:16.563641 | orchestrator | 2026-04-18 00:47:16 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED 2026-04-18 00:47:16.563671 | orchestrator | 2026-04-18 00:47:16 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:47:16.564303 | orchestrator | 2026-04-18 00:47:16 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:47:16.566913 | orchestrator | 2026-04-18 00:47:16 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:47:16.568115 | orchestrator | 2026-04-18 00:47:16 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:47:16.568144 | orchestrator | 2026-04-18 00:47:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:47:19.619913 | orchestrator | 2026-04-18 00:47:19 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:47:19.620250 | orchestrator | 2026-04-18 00:47:19 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED 2026-04-18 00:47:19.621638 | orchestrator | 2026-04-18 00:47:19 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED 2026-04-18 00:47:19.622127 | orchestrator | 2026-04-18 00:47:19 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED 2026-04-18 00:47:19.623333 | orchestrator | 2026-04-18 00:47:19 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED 2026-04-18 00:47:19.625635 | orchestrator | 2026-04-18 00:47:19 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:47:19.627110 | orchestrator | 2026-04-18 00:47:19 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is 
in state STARTED
2026-04-18 00:47:19.627784 | orchestrator | 2026-04-18 00:47:19 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:22.845855 | orchestrator | 2026-04-18 00:47:22 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:22.846082 | orchestrator | 2026-04-18 00:47:22 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED
2026-04-18 00:47:22.846112 | orchestrator | 2026-04-18 00:47:22 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:22.846132 | orchestrator | 2026-04-18 00:47:22 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED
2026-04-18 00:47:22.846139 | orchestrator | 2026-04-18 00:47:22 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:22.846145 | orchestrator | 2026-04-18 00:47:22 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:22.846151 | orchestrator | 2026-04-18 00:47:22 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:22.846166 | orchestrator | 2026-04-18 00:47:22 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:25.882294 | orchestrator | 2026-04-18 00:47:25 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:25.882384 | orchestrator | 2026-04-18 00:47:25 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED
2026-04-18 00:47:25.882394 | orchestrator | 2026-04-18 00:47:25 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:25.882402 | orchestrator | 2026-04-18 00:47:25 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED
2026-04-18 00:47:25.882408 | orchestrator | 2026-04-18 00:47:25 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:25.882415 | orchestrator | 2026-04-18 00:47:25 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:25.882422 | orchestrator | 2026-04-18 00:47:25 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:25.882427 | orchestrator | 2026-04-18 00:47:25 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:29.073789 | orchestrator | 2026-04-18 00:47:28 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:29.073844 | orchestrator | 2026-04-18 00:47:28 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED
2026-04-18 00:47:29.073861 | orchestrator | 2026-04-18 00:47:28 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:29.073867 | orchestrator | 2026-04-18 00:47:28 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED
2026-04-18 00:47:29.073886 | orchestrator | 2026-04-18 00:47:28 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:29.073891 | orchestrator | 2026-04-18 00:47:28 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:29.073896 | orchestrator | 2026-04-18 00:47:28 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:29.073901 | orchestrator | 2026-04-18 00:47:28 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:31.949158 | orchestrator | 2026-04-18 00:47:31 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:31.949206 | orchestrator | 2026-04-18 00:47:31 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state STARTED
2026-04-18 00:47:31.949211 | orchestrator | 2026-04-18 00:47:31 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:31.949214 | orchestrator | 2026-04-18 00:47:31 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED
2026-04-18 00:47:31.949218 | orchestrator | 2026-04-18 00:47:31 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:31.949221 | orchestrator | 2026-04-18 00:47:31 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:31.949224 | orchestrator | 2026-04-18 00:47:31 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:31.949227 | orchestrator | 2026-04-18 00:47:31 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:34.963524 | orchestrator | 2026-04-18 00:47:34 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:34.963577 | orchestrator | 2026-04-18 00:47:34 | INFO  | Task 97d68f51-c905-4563-9865-ac6944ffbd2d is in state SUCCESS
2026-04-18 00:47:34.963584 | orchestrator | 2026-04-18 00:47:34 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:34.963589 | orchestrator | 2026-04-18 00:47:34 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED
2026-04-18 00:47:34.963594 | orchestrator | 2026-04-18 00:47:34 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:34.963600 | orchestrator | 2026-04-18 00:47:34 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:34.963605 | orchestrator | 2026-04-18 00:47:34 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:34.963611 | orchestrator | 2026-04-18 00:47:34 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:38.247893 | orchestrator | 2026-04-18 00:47:38 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:38.248047 | orchestrator | 2026-04-18 00:47:38 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:38.248070 | orchestrator | 2026-04-18 00:47:38 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state STARTED
2026-04-18 00:47:38.248084 | orchestrator | 2026-04-18 00:47:38 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:38.248102 | orchestrator | 2026-04-18 00:47:38 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:38.248115 | orchestrator | 2026-04-18 00:47:38 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:38.248130 | orchestrator | 2026-04-18 00:47:38 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:41.090797 | orchestrator | 2026-04-18 00:47:41 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:41.102689 | orchestrator | 2026-04-18 00:47:41 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:41.102761 | orchestrator | 2026-04-18 00:47:41 | INFO  | Task 5c945055-2698-4c07-8b10-132faabe3208 is in state SUCCESS
2026-04-18 00:47:41.102767 | orchestrator | 2026-04-18 00:47:41 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:41.102772 | orchestrator | 2026-04-18 00:47:41 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:41.102776 | orchestrator | 2026-04-18 00:47:41 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:41.102797 | orchestrator | 2026-04-18 00:47:41 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:44.151575 | orchestrator | 2026-04-18 00:47:44 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:44.153605 | orchestrator | 2026-04-18 00:47:44 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:44.155429 | orchestrator | 2026-04-18 00:47:44 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:44.157053 | orchestrator | 2026-04-18 00:47:44 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:44.158906 | orchestrator | 2026-04-18 00:47:44 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:44.159010 | orchestrator | 2026-04-18 00:47:44 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:47.214510 | orchestrator | 2026-04-18 00:47:47 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:47.214585 | orchestrator | 2026-04-18 00:47:47 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:47.220858 | orchestrator | 2026-04-18 00:47:47 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:47.221001 | orchestrator | 2026-04-18 00:47:47 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:47.221712 | orchestrator | 2026-04-18 00:47:47 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:47.221772 | orchestrator | 2026-04-18 00:47:47 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:50.254231 | orchestrator | 2026-04-18 00:47:50 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:50.255717 | orchestrator | 2026-04-18 00:47:50 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:50.257624 | orchestrator | 2026-04-18 00:47:50 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:50.259094 | orchestrator | 2026-04-18 00:47:50 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:50.260071 | orchestrator | 2026-04-18 00:47:50 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:50.260242 | orchestrator | 2026-04-18 00:47:50 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:53.350511 | orchestrator | 2026-04-18 00:47:53 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:53.350559 | orchestrator | 2026-04-18 00:47:53 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:53.351385 | orchestrator | 2026-04-18 00:47:53 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state STARTED
2026-04-18 00:47:53.351982 | orchestrator | 2026-04-18 00:47:53 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:47:53.353917 | orchestrator | 2026-04-18 00:47:53 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED
2026-04-18 00:47:53.354252 | orchestrator | 2026-04-18 00:47:53 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:47:56.386577 | orchestrator | 2026-04-18 00:47:56 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:47:56.389635 | orchestrator | 2026-04-18 00:47:56 | INFO  | Task d8185a23-f7b6-4720-b56a-40094c01a58a is in state STARTED
2026-04-18 00:47:56.389676 | orchestrator | 2026-04-18 00:47:56 | INFO  | Task b0bf6705-b009-49c4-9532-1f8fd8996195 is in state STARTED
2026-04-18 00:47:56.389682 | orchestrator | 2026-04-18 00:47:56 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:47:56.389686 | orchestrator | 2026-04-18 00:47:56 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED
2026-04-18 00:47:56.395645 | orchestrator | 2026-04-18 00:47:56 | INFO  | Task 570a48eb-e3c6-41b2-9a08-827bd275e940 is in state SUCCESS
2026-04-18 00:47:56.397222 | orchestrator |
2026-04-18 00:47:56.397254 | orchestrator |
2026-04-18 00:47:56.397261 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-04-18 00:47:56.397268 | orchestrator |
2026-04-18 00:47:56.397283 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-04-18 00:47:56.397289 | orchestrator | Saturday 18 April 2026 00:46:51 +0000 (0:00:00.813) 0:00:00.813 ********
2026-04-18 00:47:56.397295 | orchestrator | ok: [testbed-manager] => {
2026-04-18 00:47:56.397302 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-04-18 00:47:56.397308 | orchestrator | }
2026-04-18 00:47:56.397314 | orchestrator |
2026-04-18 00:47:56.397319 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-04-18 00:47:56.397328 | orchestrator | Saturday 18 April 2026 00:46:51 +0000 (0:00:00.292) 0:00:01.105 ********
2026-04-18 00:47:56.397334 | orchestrator | ok: [testbed-manager]
2026-04-18 00:47:56.397339 | orchestrator |
2026-04-18 00:47:56.397345 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-04-18 00:47:56.397350 | orchestrator | Saturday 18 April 2026 00:46:54 +0000 (0:00:02.184) 0:00:03.290 ********
2026-04-18 00:47:56.397556 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-04-18 00:47:56.397562 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-04-18 00:47:56.397567 | orchestrator |
2026-04-18 00:47:56.397572 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-04-18 00:47:56.397578 | orchestrator | Saturday 18 April 2026 00:46:55 +0000 (0:00:01.335) 0:00:04.625 ********
2026-04-18 00:47:56.397584 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.397589 | orchestrator |
2026-04-18 00:47:56.397594 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-04-18 00:47:56.397600 | orchestrator | Saturday 18 April 2026 00:46:57 +0000 (0:00:01.992) 0:00:06.618 ********
2026-04-18 00:47:56.397606 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.397611 | orchestrator |
2026-04-18 00:47:56.397616 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-04-18 00:47:56.397621 | orchestrator | Saturday 18 April 2026 00:47:00 +0000 (0:00:02.880) 0:00:09.498 ********
2026-04-18 00:47:56.397627 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-04-18 00:47:56.397632 | orchestrator | ok: [testbed-manager]
2026-04-18 00:47:56.397637 | orchestrator |
2026-04-18 00:47:56.397642 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-04-18 00:47:56.397648 | orchestrator | Saturday 18 April 2026 00:47:28 +0000 (0:00:27.977) 0:00:37.476 ********
2026-04-18 00:47:56.397653 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.397658 | orchestrator |
2026-04-18 00:47:56.397663 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:47:56.397679 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:47:56.397685 | orchestrator |
2026-04-18 00:47:56.397690 | orchestrator |
2026-04-18 00:47:56.397695 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:47:56.397701 | orchestrator | Saturday 18 April 2026 00:47:31 +0000 (0:00:03.626) 0:00:41.103 ********
2026-04-18 00:47:56.397706 | orchestrator | ===============================================================================
2026-04-18 00:47:56.397711 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.98s
2026-04-18 00:47:56.397716 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.63s
2026-04-18 00:47:56.397721 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.88s
2026-04-18 00:47:56.397727 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.18s
2026-04-18 00:47:56.397732 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.99s
2026-04-18 00:47:56.397737 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.34s
2026-04-18 00:47:56.397742 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.29s
2026-04-18 00:47:56.397747 | orchestrator |
2026-04-18 00:47:56.397752 | orchestrator |
2026-04-18 00:47:56.397758 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-18 00:47:56.397763 | orchestrator |
2026-04-18 00:47:56.397769 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-18 00:47:56.397774 | orchestrator | Saturday 18 April 2026 00:46:51 +0000 (0:00:00.509) 0:00:00.509 ********
2026-04-18 00:47:56.397780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-18 00:47:56.397786 | orchestrator |
2026-04-18 00:47:56.397791 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-18 00:47:56.397796 | orchestrator | Saturday 18 April 2026 00:46:51 +0000 (0:00:00.342) 0:00:00.851 ********
2026-04-18 00:47:56.397802 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-18 00:47:56.397807 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-18 00:47:56.397813 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-18 00:47:56.397818 | orchestrator |
2026-04-18 00:47:56.397823 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-18 00:47:56.397829 | orchestrator | Saturday 18 April 2026 00:46:53 +0000 (0:00:01.837) 0:00:02.688 ********
2026-04-18 00:47:56.397834 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.397839 | orchestrator |
2026-04-18 00:47:56.397844 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-18 00:47:56.397858 | orchestrator | Saturday 18 April 2026 00:46:56 +0000 (0:00:03.158) 0:00:05.846 ********
2026-04-18 00:47:56.397870 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-18 00:47:56.397875 | orchestrator | ok: [testbed-manager]
2026-04-18 00:47:56.397881 | orchestrator |
2026-04-18 00:47:56.397886 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-18 00:47:56.397891 | orchestrator | Saturday 18 April 2026 00:47:30 +0000 (0:00:33.777) 0:00:39.624 ********
2026-04-18 00:47:56.397975 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.397984 | orchestrator |
2026-04-18 00:47:56.397990 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-18 00:47:56.397995 | orchestrator | Saturday 18 April 2026 00:47:33 +0000 (0:00:02.772) 0:00:42.397 ********
2026-04-18 00:47:56.398000 | orchestrator | ok: [testbed-manager]
2026-04-18 00:47:56.398006 | orchestrator |
2026-04-18 00:47:56.398042 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-18 00:47:56.398421 | orchestrator | Saturday 18 April 2026 00:47:34 +0000 (0:00:00.957) 0:00:43.355 ********
2026-04-18 00:47:56.398433 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.398439 | orchestrator |
2026-04-18 00:47:56.398445 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-18 00:47:56.398451 | orchestrator | Saturday 18 April 2026 00:47:36 +0000 (0:00:02.664) 0:00:46.019 ********
2026-04-18 00:47:56.398491 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.398498 | orchestrator |
2026-04-18 00:47:56.398504 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-18 00:47:56.398509 | orchestrator | Saturday 18 April 2026 00:47:39 +0000 (0:00:02.337) 0:00:48.356 ********
2026-04-18 00:47:56.398514 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.398520 | orchestrator |
2026-04-18 00:47:56.398525 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-18 00:47:56.398531 | orchestrator | Saturday 18 April 2026 00:47:40 +0000 (0:00:00.922) 0:00:49.279 ********
2026-04-18 00:47:56.398536 | orchestrator | ok: [testbed-manager]
2026-04-18 00:47:56.398541 | orchestrator |
2026-04-18 00:47:56.398547 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:47:56.398552 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:47:56.398558 | orchestrator |
2026-04-18 00:47:56.398563 | orchestrator |
2026-04-18 00:47:56.398569 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:47:56.398575 | orchestrator | Saturday 18 April 2026 00:47:40 +0000 (0:00:00.664) 0:00:49.943 ********
2026-04-18 00:47:56.398738 | orchestrator | ===============================================================================
2026-04-18 00:47:56.398744 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.78s
2026-04-18 00:47:56.398749 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.16s
2026-04-18 00:47:56.398755 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.77s
2026-04-18 00:47:56.398760 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.66s
2026-04-18 00:47:56.398765 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.34s
2026-04-18 00:47:56.398771 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.84s
2026-04-18 00:47:56.398776 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.96s
2026-04-18 00:47:56.398781 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.92s
2026-04-18 00:47:56.398787 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.66s
2026-04-18 00:47:56.398792 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.34s
2026-04-18 00:47:56.398797 | orchestrator |
2026-04-18 00:47:56.398803 | orchestrator |
2026-04-18 00:47:56.398808 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-18 00:47:56.398813 | orchestrator |
2026-04-18 00:47:56.398818 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-18 00:47:56.398823 | orchestrator | Saturday 18 April 2026 00:46:44 +0000 (0:00:00.278) 0:00:00.278 ********
2026-04-18 00:47:56.398829 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:47:56.398834 | orchestrator |
2026-04-18 00:47:56.398839 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-18 00:47:56.398874 | orchestrator | Saturday 18 April 2026 00:46:45 +0000 (0:00:01.108) 0:00:01.386 ********
2026-04-18 00:47:56.398881 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-18 00:47:56.398887 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-18 00:47:56.398892 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-18 00:47:56.398903 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-18 00:47:56.398908 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-18 00:47:56.398913 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-18 00:47:56.398927 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-18 00:47:56.398933 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-18 00:47:56.398938 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-18 00:47:56.398944 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-18 00:47:56.398949 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-18 00:47:56.398973 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-18 00:47:56.398979 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-18 00:47:56.398985 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-18 00:47:56.398990 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-18 00:47:56.398995 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-18 00:47:56.399000 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-18 00:47:56.399006 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-18 00:47:56.399011 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-18 00:47:56.399016 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-18 00:47:56.399021 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-18 00:47:56.399027 | orchestrator |
2026-04-18 00:47:56.399032 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-18 00:47:56.399037 | orchestrator | Saturday 18 April 2026 00:46:50 +0000 (0:00:04.588) 0:00:05.978 ********
2026-04-18 00:47:56.399042 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:47:56.399048 | orchestrator |
2026-04-18 00:47:56.399053 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-18 00:47:56.399058 | orchestrator | Saturday 18 April 2026 00:46:51 +0000 (0:00:01.514) 0:00:07.492 ********
2026-04-18 00:47:56.399066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.399073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.399079 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.399102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.399123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.399131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.399137 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.399143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399163 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399256 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399262 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399267 | orchestrator |
2026-04-18 00:47:56.399273 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-04-18 00:47:56.399281 | orchestrator | Saturday 18 April 2026 00:46:56 +0000 (0:00:05.172) 0:00:12.665 ********
2026-04-18 00:47:56.399286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.399292 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.399298 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.399312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399349 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399368 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:47:56.399373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399404 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:47:56.399409 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:47:56.399417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399422 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:47:56.399428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399433 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:47:56.399442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399464 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:47:56.399469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399489 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:47:56.399494 | orchestrator | 2026-04-18 00:47:56.399499 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS 
key] ****** 2026-04-18 00:47:56.399505 | orchestrator | Saturday 18 April 2026 00:47:00 +0000 (0:00:03.326) 0:00:15.992 ******** 2026-04-18 00:47:56.399514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399520 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399534 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399576 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:47:56.399582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399603 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:47:56.399609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399614 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:47:56.399620 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399626 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:47:56.399632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399673 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:47:56.399679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399690 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:47:56.399696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-18 00:47:56.399701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.399713 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:47:56.399718 | orchestrator | 2026-04-18 00:47:56.399723 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-18 00:47:56.399728 | orchestrator | Saturday 18 April 2026 00:47:05 +0000 (0:00:04.906) 0:00:20.899 ******** 2026-04-18 00:47:56.399733 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:47:56.399739 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:47:56.399744 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:47:56.399750 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:47:56.399755 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:47:56.399761 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:47:56.399766 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:47:56.399775 | orchestrator | 2026-04-18 00:47:56.399794 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-18 00:47:56.399800 | orchestrator | Saturday 18 April 2026 00:47:06 +0000 (0:00:01.547) 0:00:22.446 ******** 2026-04-18 00:47:56.399805 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:47:56.399811 | orchestrator | skipping: [testbed-node-0] 2026-04-18 
00:47:56.399816 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:47:56.399821 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:47:56.399826 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:47:56.399832 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:47:56.399837 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:47:56.399842 | orchestrator | 2026-04-18 00:47:56.399847 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-18 00:47:56.399852 | orchestrator | Saturday 18 April 2026 00:47:07 +0000 (0:00:01.111) 0:00:23.558 ******** 2026-04-18 00:47:56.399857 | orchestrator | skipping: [testbed-manager] 2026-04-18 00:47:56.399865 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:47:56.399871 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:47:56.399876 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:47:56.399882 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:47:56.399888 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:47:56.399892 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:47:56.399896 | orchestrator | 2026-04-18 00:47:56.399899 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-18 00:47:56.399902 | orchestrator | Saturday 18 April 2026 00:47:09 +0000 (0:00:01.222) 0:00:24.780 ******** 2026-04-18 00:47:56.399905 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:47:56.399908 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:47:56.399911 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:47:56.399914 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:47:56.399952 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:47:56.399956 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:47:56.399959 | orchestrator | changed: [testbed-manager] 2026-04-18 00:47:56.399962 | orchestrator | 2026-04-18 00:47:56.399965 | orchestrator | TASK 
[common : Copying over config.json files for services] ******************** 2026-04-18 00:47:56.399969 | orchestrator | Saturday 18 April 2026 00:47:11 +0000 (0:00:02.322) 0:00:27.102 ******** 2026-04-18 00:47:56.399972 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.399975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.399979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.399985 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400043 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400047 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400050 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400077 | orchestrator | 2026-04-18 00:47:56.400080 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-18 00:47:56.400085 | orchestrator | Saturday 18 April 2026 00:47:15 +0000 (0:00:03.988) 0:00:31.090 ******** 2026-04-18 00:47:56.400088 | orchestrator | [WARNING]: Skipped 2026-04-18 00:47:56.400091 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-18 00:47:56.400095 | orchestrator | to this access issue: 2026-04-18 00:47:56.400098 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-18 00:47:56.400101 | orchestrator | directory 2026-04-18 00:47:56.400104 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-18 00:47:56.400107 | orchestrator | 2026-04-18 00:47:56.400110 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-18 00:47:56.400113 | orchestrator | Saturday 18 April 2026 00:47:16 +0000 (0:00:00.947) 0:00:32.038 ******** 2026-04-18 00:47:56.400116 | orchestrator | [WARNING]: Skipped 2026-04-18 00:47:56.400119 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-18 00:47:56.400122 | orchestrator | to this access issue: 2026-04-18 00:47:56.400125 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-18 00:47:56.400128 | orchestrator | directory 2026-04-18 00:47:56.400131 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-18 00:47:56.400135 | orchestrator | 2026-04-18 00:47:56.400138 | orchestrator | TASK [common : Find 
custom fluentd format config files] ************************ 2026-04-18 00:47:56.400141 | orchestrator | Saturday 18 April 2026 00:47:17 +0000 (0:00:00.809) 0:00:32.847 ******** 2026-04-18 00:47:56.400144 | orchestrator | [WARNING]: Skipped 2026-04-18 00:47:56.400147 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-18 00:47:56.400150 | orchestrator | to this access issue: 2026-04-18 00:47:56.400153 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-18 00:47:56.400158 | orchestrator | directory 2026-04-18 00:47:56.400161 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-18 00:47:56.400164 | orchestrator | 2026-04-18 00:47:56.400167 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-18 00:47:56.400170 | orchestrator | Saturday 18 April 2026 00:47:18 +0000 (0:00:01.195) 0:00:34.043 ******** 2026-04-18 00:47:56.400173 | orchestrator | [WARNING]: Skipped 2026-04-18 00:47:56.400176 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-18 00:47:56.400179 | orchestrator | to this access issue: 2026-04-18 00:47:56.400182 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-18 00:47:56.400185 | orchestrator | directory 2026-04-18 00:47:56.400188 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-18 00:47:56.400191 | orchestrator | 2026-04-18 00:47:56.400195 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-18 00:47:56.400198 | orchestrator | Saturday 18 April 2026 00:47:19 +0000 (0:00:01.178) 0:00:35.222 ******** 2026-04-18 00:47:56.400201 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:47:56.400204 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:47:56.400207 | orchestrator | changed: [testbed-node-4] 2026-04-18 
00:47:56.400210 | orchestrator | changed: [testbed-manager] 2026-04-18 00:47:56.400213 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:47:56.400216 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:47:56.400219 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:47:56.400222 | orchestrator | 2026-04-18 00:47:56.400227 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-18 00:47:56.400232 | orchestrator | Saturday 18 April 2026 00:47:24 +0000 (0:00:05.410) 0:00:40.632 ******** 2026-04-18 00:47:56.400237 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-18 00:47:56.400243 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-18 00:47:56.400248 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-18 00:47:56.400254 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-18 00:47:56.400259 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-18 00:47:56.400263 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-18 00:47:56.400266 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-18 00:47:56.400269 | orchestrator | 2026-04-18 00:47:56.400272 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-18 00:47:56.400275 | orchestrator | Saturday 18 April 2026 00:47:28 +0000 (0:00:03.209) 0:00:43.841 ******** 2026-04-18 00:47:56.400278 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:47:56.400281 | orchestrator | changed: [testbed-node-1] 2026-04-18 
00:47:56.400284 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:47:56.400287 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:47:56.400290 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:47:56.400293 | orchestrator | changed: [testbed-manager] 2026-04-18 00:47:56.400299 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:47:56.400302 | orchestrator | 2026-04-18 00:47:56.400305 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-18 00:47:56.400308 | orchestrator | Saturday 18 April 2026 00:47:30 +0000 (0:00:02.677) 0:00:46.519 ******** 2026-04-18 00:47:56.400315 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.400324 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.400331 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400334 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.400342 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400347 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.400354 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.400362 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400365 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400370 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-18 00:47:56.400380 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400384 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:47:56.400390 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400393 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 00:47:56.400396 | orchestrator | 2026-04-18 00:47:56.400400 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-18 00:47:56.400403 | orchestrator | Saturday 18 April 2026 00:47:33 +0000 (0:00:02.881) 0:00:49.400 ******** 2026-04-18 00:47:56.400406 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-18 00:47:56.400409 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-18 00:47:56.400412 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-18 00:47:56.400415 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-18 00:47:56.400420 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-18 00:47:56.400423 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-18 00:47:56.400428 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-18 00:47:56.400431 | orchestrator | 2026-04-18 00:47:56.400434 | orchestrator | 
TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-18 00:47:56.400437 | orchestrator | Saturday 18 April 2026 00:47:36 +0000 (0:00:03.072) 0:00:52.472 ******** 2026-04-18 00:47:56.400440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-18 00:47:56.400443 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-18 00:47:56.400446 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-18 00:47:56.400450 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-18 00:47:56.400457 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-18 00:47:56.400462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-18 00:47:56.400468 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-18 00:47:56.400474 | orchestrator | 2026-04-18 00:47:56.400479 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-04-18 00:47:56.400484 | orchestrator | Saturday 18 April 2026 00:47:39 +0000 (0:00:02.845) 0:00:55.318 ******** 2026-04-18 00:47:56.400490 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-18 00:47:56.400496 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400533 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400539 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400563 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400651 | orchestrator |
2026-04-18 00:47:56.400654 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-04-18 00:47:56.400657 | orchestrator | Saturday 18 April 2026 00:47:44 +0000 (0:00:04.738) 0:01:00.056 ********
2026-04-18 00:47:56.400660 | orchestrator | changed: [testbed-manager] => {
2026-04-18 00:47:56.400664 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:47:56.400667 | orchestrator | }
2026-04-18 00:47:56.400670 | orchestrator | changed: [testbed-node-0] => {
2026-04-18 00:47:56.400673 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:47:56.400676 | orchestrator | }
2026-04-18 00:47:56.400679 | orchestrator | changed: [testbed-node-1] => {
2026-04-18 00:47:56.400682 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:47:56.400685 | orchestrator | }
2026-04-18 00:47:56.400688 | orchestrator | changed: [testbed-node-2] => {
2026-04-18 00:47:56.400691 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:47:56.400696 | orchestrator | }
2026-04-18 00:47:56.400699 | orchestrator | changed: [testbed-node-3] => {
2026-04-18 00:47:56.400702 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:47:56.400705 | orchestrator | }
2026-04-18 00:47:56.400708 | orchestrator | changed: [testbed-node-4] => {
2026-04-18 00:47:56.400711 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:47:56.400714 | orchestrator | }
2026-04-18 00:47:56.400717 | orchestrator | changed: [testbed-node-5] => {
2026-04-18 00:47:56.400720 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:47:56.400723 | orchestrator | }
2026-04-18 00:47:56.400727 | orchestrator |
2026-04-18 00:47:56.400730 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-18 00:47:56.400733 | orchestrator | Saturday 18 April 2026 00:47:44 +0000 (0:00:00.695) 0:01:00.752 ********
2026-04-18 00:47:56.400736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400739 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400748 | orchestrator | skipping: [testbed-manager]
2026-04-18 00:47:56.400751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400763 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:47:56.400769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400790 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:47:56.400793 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:47:56.400798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400810 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:47:56.400813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400824 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:47:56.400827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-18 00:47:56.400831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:47:56.400839 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:47:56.400842 | orchestrator |
2026-04-18 00:47:56.400845 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-04-18 00:47:56.400849 | orchestrator | Saturday 18 April 2026 00:47:46 +0000 (0:00:01.630) 0:01:02.383 ********
2026-04-18 00:47:56.400852 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.400855 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:47:56.400858 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:47:56.400861 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:47:56.400864 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:47:56.400867 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:47:56.400870 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:47:56.400874 | orchestrator |
2026-04-18 00:47:56.400877 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-18 00:47:56.400880 | orchestrator | Saturday 18 April 2026 00:47:48 +0000 (0:00:01.981) 0:01:04.365 ********
2026-04-18 00:47:56.400883 | orchestrator | changed: [testbed-manager]
2026-04-18 00:47:56.400886 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:47:56.400889 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:47:56.400892 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:47:56.400895 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:47:56.400898 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:47:56.400901 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:47:56.400904 | orchestrator |
2026-04-18 00:47:56.400907 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-18 00:47:56.400911 | orchestrator | Saturday 18 April 2026 00:47:50 +0000 (0:00:01.473) 0:01:05.839 ********
2026-04-18 00:47:56.400914 | orchestrator |
2026-04-18 00:47:56.400926 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-18 00:47:56.400930 | orchestrator | Saturday 18 April 2026 00:47:50 +0000 (0:00:00.063) 0:01:05.902 ********
2026-04-18 00:47:56.400933 | orchestrator |
2026-04-18 00:47:56.400936 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-18 00:47:56.400939 | orchestrator | Saturday 18 April 2026 00:47:50 +0000 (0:00:00.058) 0:01:05.960 ********
2026-04-18 00:47:56.400942 | orchestrator |
2026-04-18 00:47:56.400945 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-18 00:47:56.400948 | orchestrator | Saturday 18 April 2026 00:47:50 +0000 (0:00:00.058) 0:01:06.019 ********
2026-04-18 00:47:56.400951 | orchestrator |
2026-04-18 00:47:56.400954 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-18 00:47:56.400957 | orchestrator | Saturday 18 April 2026 00:47:50 +0000 (0:00:00.049) 0:01:06.068 ********
2026-04-18 00:47:56.400960 | orchestrator |
2026-04-18 00:47:56.400964 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-18 00:47:56.400967 | orchestrator | Saturday 18 April 2026 00:47:50 +0000 (0:00:00.049) 0:01:06.118 ********
2026-04-18 00:47:56.400970 | orchestrator |
2026-04-18 00:47:56.400973 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-18 00:47:56.400976 | orchestrator | Saturday 18 April 2026 00:47:50 +0000 (0:00:00.060) 0:01:06.179 ********
2026-04-18 00:47:56.400979 | orchestrator |
2026-04-18 00:47:56.400982 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-18 00:47:56.400985 | orchestrator | Saturday 18 April 2026 00:47:50 +0000 (0:00:00.093) 0:01:06.272 ********
2026-04-18 00:47:56.400995 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_wt51vq0c/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_wt51vq0c/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_wt51vq0c/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_wt51vq0c/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"}
2026-04-18 00:47:56.401002 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_0mjhi077/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_0mjhi077/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_0mjhi077/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_0mjhi077/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"}
2026-04-18 00:47:56.401011 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_88vpsexc/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_88vpsexc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_88vpsexc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_88vpsexc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"}
2026-04-18 00:47:56.401019 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_em1q05xo/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_em1q05xo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_em1q05xo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_em1q05xo/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"}
2026-04-18 00:47:56.401027 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_8lith88y/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_8lith88y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_8lith88y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_8lith88y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-18 00:47:56.401035 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5j_6yics/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5j_6yics/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_5j_6yics/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5j_6yics/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-18 00:47:56.401042 | orchestrator | fatal: [testbed-node-5]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_gqee8l_q/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_gqee8l_q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_gqee8l_q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_gqee8l_q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Ffluentd: Internal Server Error (\"unknown: repository kolla/release/2024.2/fluentd not found\")\\n'"} 2026-04-18 00:47:56.401045 | orchestrator | 2026-04-18 00:47:56.401048 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:47:56.401054 | orchestrator | testbed-manager : ok=20  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-18 00:47:56.401057 | orchestrator | testbed-node-0 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-18 00:47:56.401062 | orchestrator | testbed-node-1 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-18 00:47:56.401065 | orchestrator | testbed-node-2 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-18 00:47:56.401068 | orchestrator | testbed-node-3 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-18 00:47:56.401074 | orchestrator | testbed-node-4 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-18 00:47:56.401077 | orchestrator | testbed-node-5 : ok=16  changed=13  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-18 00:47:56.401080 | orchestrator | 2026-04-18 00:47:56.401083 | orchestrator | 2026-04-18 00:47:56.401086 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:47:56.401089 | orchestrator | Saturday 18 April 2026 00:47:53 +0000 (0:00:02.923) 0:01:09.197 ******** 2026-04-18 00:47:56.401093 | orchestrator | =============================================================================== 2026-04-18 00:47:56.401096 | orchestrator | common : Copying over fluentd.conf 
-------------------------------------- 5.41s 2026-04-18 00:47:56.401099 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.17s 2026-04-18 00:47:56.401102 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.91s 2026-04-18 00:47:56.401105 | orchestrator | service-check-containers : common | Check containers -------------------- 4.74s 2026-04-18 00:47:56.401108 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.59s 2026-04-18 00:47:56.401111 | orchestrator | common : Copying over config.json files for services -------------------- 3.99s 2026-04-18 00:47:56.401114 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.33s 2026-04-18 00:47:56.401117 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.21s 2026-04-18 00:47:56.401120 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.07s 2026-04-18 00:47:56.401123 | orchestrator | common : Restart fluentd container -------------------------------------- 2.92s 2026-04-18 00:47:56.401126 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.88s 2026-04-18 00:47:56.401130 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.85s 2026-04-18 00:47:56.401133 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.68s 2026-04-18 00:47:56.401136 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.32s 2026-04-18 00:47:56.401139 | orchestrator | common : Creating log volume -------------------------------------------- 1.98s 2026-04-18 00:47:56.401142 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.63s 2026-04-18 00:47:56.401145 | orchestrator | common : Ensure /var/log/journal exists on 
EL10 systems ----------------- 1.55s 2026-04-18 00:47:56.401148 | orchestrator | common : include_tasks -------------------------------------------------- 1.51s 2026-04-18 00:47:56.401151 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.47s 2026-04-18 00:47:56.401154 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.22s 2026-04-18 00:47:56.401159 | orchestrator | 2026-04-18 00:47:56 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:47:56.401163 | orchestrator | 2026-04-18 00:47:56 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:47:56.401446 | orchestrator | 2026-04-18 00:47:56 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is in state STARTED 2026-04-18 00:47:56.401457 | orchestrator | 2026-04-18 00:47:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:47:59.420251 | orchestrator | 2026-04-18 00:47:59 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:47:59.422087 | orchestrator | 2026-04-18 00:47:59 | INFO  | Task d8185a23-f7b6-4720-b56a-40094c01a58a is in state STARTED 2026-04-18 00:47:59.423658 | orchestrator | 2026-04-18 00:47:59 | INFO  | Task b0bf6705-b009-49c4-9532-1f8fd8996195 is in state STARTED 2026-04-18 00:47:59.426514 | orchestrator | 2026-04-18 00:47:59 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED 2026-04-18 00:47:59.428655 | orchestrator | 2026-04-18 00:47:59 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED 2026-04-18 00:47:59.430883 | orchestrator | 2026-04-18 00:47:59 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:47:59.435148 | orchestrator | 2026-04-18 00:47:59 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:47:59.438386 | orchestrator | 2026-04-18 00:47:59 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is 
in state STARTED 2026-04-18 00:47:59.438454 | orchestrator | 2026-04-18 00:47:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:48:02.482248 | orchestrator | 2026-04-18 00:48:02 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:48:02.483026 | orchestrator | 2026-04-18 00:48:02 | INFO  | Task d8185a23-f7b6-4720-b56a-40094c01a58a is in state STARTED 2026-04-18 00:48:02.483624 | orchestrator | 2026-04-18 00:48:02 | INFO  | Task b0bf6705-b009-49c4-9532-1f8fd8996195 is in state STARTED 2026-04-18 00:48:02.484333 | orchestrator | 2026-04-18 00:48:02 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED 2026-04-18 00:48:02.484843 | orchestrator | 2026-04-18 00:48:02 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in state STARTED 2026-04-18 00:48:02.486384 | orchestrator | 2026-04-18 00:48:02 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:48:02.488450 | orchestrator | 2026-04-18 00:48:02 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:48:02.490362 | orchestrator | 2026-04-18 00:48:02 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is in state STARTED 2026-04-18 00:48:02.490504 | orchestrator | 2026-04-18 00:48:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:48:05.529731 | orchestrator | 2026-04-18 00:48:05 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:48:05.530253 | orchestrator | 2026-04-18 00:48:05 | INFO  | Task d8185a23-f7b6-4720-b56a-40094c01a58a is in state STARTED 2026-04-18 00:48:05.531190 | orchestrator | 2026-04-18 00:48:05 | INFO  | Task b0bf6705-b009-49c4-9532-1f8fd8996195 is in state STARTED 2026-04-18 00:48:05.531523 | orchestrator | 2026-04-18 00:48:05 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED 2026-04-18 00:48:05.532491 | orchestrator | 2026-04-18 00:48:05 | INFO  | Task 69346bca-86db-4659-978b-27319704f24b is in 
state SUCCESS 2026-04-18 00:48:05.533214 | orchestrator | 2026-04-18 00:48:05 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:48:05.533700 | orchestrator | 2026-04-18 00:48:05 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state STARTED 2026-04-18 00:48:05.534705 | orchestrator | 2026-04-18 00:48:05 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is in state STARTED 2026-04-18 00:48:05.534734 | orchestrator | 2026-04-18 00:48:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:48:08.576777 | orchestrator | 2026-04-18 00:48:08 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:48:08.579075 | orchestrator | 2026-04-18 00:48:08 | INFO  | Task d8185a23-f7b6-4720-b56a-40094c01a58a is in state STARTED 2026-04-18 00:48:08.579750 | orchestrator | 2026-04-18 00:48:08 | INFO  | Task b0bf6705-b009-49c4-9532-1f8fd8996195 is in state STARTED 2026-04-18 00:48:08.580618 | orchestrator | 2026-04-18 00:48:08 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED 2026-04-18 00:48:08.582058 | orchestrator | 2026-04-18 00:48:08 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:48:08.582661 | orchestrator | 2026-04-18 00:48:08 | INFO  | Task 158efb6d-0b26-4cef-9d84-fc653e344cd8 is in state SUCCESS 2026-04-18 00:48:08.582899 | orchestrator | 2026-04-18 00:48:08.582926 | orchestrator | 2026-04-18 00:48:08.582931 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-04-18 00:48:08.582935 | orchestrator | 2026-04-18 00:48:08.582938 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-04-18 00:48:08.582942 | orchestrator | Saturday 18 April 2026 00:47:05 +0000 (0:00:00.351) 0:00:00.351 ******** 2026-04-18 00:48:08.582945 | orchestrator | ok: [testbed-manager] 2026-04-18 00:48:08.582949 | orchestrator | 2026-04-18 00:48:08.582953 | 
orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-04-18 00:48:08.582956 | orchestrator | Saturday 18 April 2026 00:47:07 +0000 (0:00:01.659) 0:00:02.011 ******** 2026-04-18 00:48:08.582959 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-04-18 00:48:08.582963 | orchestrator | 2026-04-18 00:48:08.582966 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-04-18 00:48:08.582969 | orchestrator | Saturday 18 April 2026 00:47:08 +0000 (0:00:00.872) 0:00:02.883 ******** 2026-04-18 00:48:08.582972 | orchestrator | changed: [testbed-manager] 2026-04-18 00:48:08.582976 | orchestrator | 2026-04-18 00:48:08.582979 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-04-18 00:48:08.582982 | orchestrator | Saturday 18 April 2026 00:47:09 +0000 (0:00:01.828) 0:00:04.712 ******** 2026-04-18 00:48:08.582985 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
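The identical node failures above all end in docker-py translating the daemon's HTTP 500 into a typed `docker.errors.APIError`, chaining the original `HTTPError` via `raise ... from e`; the final quoted `explanation` string ("unknown: repository kolla/release/2024.2/fluentd not found") is the actual root cause, shared by every node. A minimal sketch of that translation pattern (hypothetical classes and a fake daemon call, not the docker-py source):

```python
# Sketch of the exception-translation pattern visible in the tracebacks above:
# a low-level HTTP error is caught and re-raised as a typed API error, with
# `from e` preserving the original exception as __cause__.

class APIError(Exception):
    """Stand-in for docker.errors.APIError (hypothetical, for illustration)."""
    def __init__(self, message, explanation=None):
        super().__init__(message)
        self.explanation = explanation  # surfaces as the quoted string in `msg`

def pull_image(repository, tag):
    """Fake daemon call that always answers 500 (illustration only)."""
    try:
        raise IOError(f"500 Server Error for /images/create?tag={tag}&fromImage={repository}")
    except IOError as e:
        # Same shape as create_api_error_from_http_exception in the log:
        raise APIError(
            str(e),
            explanation="unknown: repository kolla/release/2024.2/fluentd not found",
        ) from e

try:
    pull_image("registry.osism.tech/kolla/release/2024.2/fluentd", "5.0.9.20260328")
except APIError as err:
    assert err.__cause__ is not None  # the original HTTP error stays chained
    print(err.explanation)
```

Because the `explanation` comes straight from the registry, every node reports the same message: the image repository is missing from the registry, not a per-node pull problem.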
2026-04-18 00:48:08.582988 | orchestrator | ok: [testbed-manager]
2026-04-18 00:48:08.582991 | orchestrator |
2026-04-18 00:48:08.582994 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-04-18 00:48:08.582997 | orchestrator | Saturday 18 April 2026 00:48:01 +0000 (0:00:51.145) 0:00:55.857 ********
2026-04-18 00:48:08.583000 | orchestrator | changed: [testbed-manager]
2026-04-18 00:48:08.583003 | orchestrator |
2026-04-18 00:48:08.583007 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:48:08.583010 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:48:08.583014 | orchestrator |
2026-04-18 00:48:08.583017 | orchestrator |
2026-04-18 00:48:08.583020 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:48:08.583023 | orchestrator | Saturday 18 April 2026 00:48:04 +0000 (0:00:03.131) 0:00:58.989 ********
2026-04-18 00:48:08.583027 | orchestrator | ===============================================================================
2026-04-18 00:48:08.583039 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 51.15s
2026-04-18 00:48:08.583043 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.13s
2026-04-18 00:48:08.583058 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.83s
2026-04-18 00:48:08.583061 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.66s
2026-04-18 00:48:08.583065 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.87s
2026-04-18 00:48:08.583068 | orchestrator |
2026-04-18 00:48:08.583353 | orchestrator |
2026-04-18 00:48:08.583369 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-18 00:48:08.583373 | orchestrator |
2026-04-18 00:48:08.583376 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:48:08.583379 | orchestrator | Saturday 18 April 2026 00:46:51 +0000 (0:00:00.596) 0:00:00.596 ********
2026-04-18 00:48:08.583382 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-18 00:48:08.583385 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-18 00:48:08.583388 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-18 00:48:08.583391 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-18 00:48:08.583394 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-18 00:48:08.583397 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-18 00:48:08.583400 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-18 00:48:08.583403 | orchestrator |
2026-04-18 00:48:08.583406 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-18 00:48:08.583409 | orchestrator |
2026-04-18 00:48:08.583412 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-18 00:48:08.583416 | orchestrator | Saturday 18 April 2026 00:46:52 +0000 (0:00:00.834) 0:00:01.430 ********
2026-04-18 00:48:08.583419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:48:08.583424 | orchestrator |
2026-04-18 00:48:08.583427 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-18 00:48:08.583430 | orchestrator | Saturday 18 April 2026 00:46:53 +0000 (0:00:01.197) 0:00:02.628 ********
2026-04-18 00:48:08.583433 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:48:08.583436 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:48:08.583439 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:48:08.583442 | orchestrator | ok: [testbed-manager]
2026-04-18 00:48:08.583445 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:48:08.583448 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:48:08.583451 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:48:08.583454 | orchestrator |
2026-04-18 00:48:08.583457 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-18 00:48:08.583461 | orchestrator | Saturday 18 April 2026 00:46:56 +0000 (0:00:03.261) 0:00:05.890 ********
2026-04-18 00:48:08.583464 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:48:08.583467 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:48:08.583470 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:48:08.583473 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:48:08.583475 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:48:08.583479 | orchestrator | ok: [testbed-manager]
2026-04-18 00:48:08.583482 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:48:08.583484 | orchestrator |
2026-04-18 00:48:08.583488 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-18 00:48:08.583491 | orchestrator | Saturday 18 April 2026 00:47:00 +0000 (0:00:03.064) 0:00:08.954 ********
2026-04-18 00:48:08.583494 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:48:08.583497 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:48:08.583500 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:48:08.583503 | orchestrator | changed: [testbed-manager]
2026-04-18 00:48:08.583506 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:48:08.583509 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:48:08.583518 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:48:08.583522 | orchestrator |
2026-04-18 00:48:08.583525 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-18 00:48:08.583528 | orchestrator | Saturday 18 April 2026 00:47:01 +0000 (0:00:01.909) 0:00:10.864 ********
2026-04-18 00:48:08.583531 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:48:08.583534 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:48:08.583537 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:48:08.583540 | orchestrator | changed: [testbed-manager]
2026-04-18 00:48:08.583543 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:48:08.583546 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:48:08.583549 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:48:08.583552 | orchestrator |
2026-04-18 00:48:08.583555 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-18 00:48:08.583558 | orchestrator | Saturday 18 April 2026 00:47:13 +0000 (0:00:11.206) 0:00:22.071 ********
2026-04-18 00:48:08.583561 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:48:08.583564 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:48:08.583567 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:48:08.583570 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:48:08.583573 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:48:08.583576 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:48:08.583579 | orchestrator | changed: [testbed-manager]
2026-04-18 00:48:08.583582 | orchestrator |
2026-04-18 00:48:08.583586 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-18 00:48:08.583589 | orchestrator | Saturday 18 April 2026 00:47:39 +0000 (0:00:26.308) 0:00:48.379 ********
2026-04-18 00:48:08.583592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:48:08.583596 | orchestrator |
2026-04-18 00:48:08.583599 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-18 00:48:08.583607 | orchestrator | Saturday 18 April 2026 00:47:41 +0000 (0:00:01.574) 0:00:49.953 ********
2026-04-18 00:48:08.583611 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-18 00:48:08.583614 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-18 00:48:08.583617 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-18 00:48:08.583620 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-18 00:48:08.583628 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-18 00:48:08.583632 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-18 00:48:08.583635 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-18 00:48:08.583638 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-18 00:48:08.583641 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-18 00:48:08.583644 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-18 00:48:08.583647 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-18 00:48:08.583650 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-18 00:48:08.583653 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-18 00:48:08.583656 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-18 00:48:08.583659 | orchestrator |
2026-04-18 00:48:08.583662 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-18 00:48:08.583666 | orchestrator | Saturday 18 April 2026 00:47:44 +0000 (0:00:03.868) 0:00:53.822 ********
2026-04-18 00:48:08.583669 | orchestrator | ok: [testbed-manager]
2026-04-18 00:48:08.583672 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:48:08.583675 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:48:08.583678 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:48:08.583681 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:48:08.583686 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:48:08.583689 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:48:08.583692 | orchestrator |
2026-04-18 00:48:08.583696 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-18 00:48:08.583699 | orchestrator | Saturday 18 April 2026 00:47:46 +0000 (0:00:01.295) 0:00:55.117 ********
2026-04-18 00:48:08.583702 | orchestrator | changed: [testbed-manager]
2026-04-18 00:48:08.583705 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:48:08.583708 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:48:08.583711 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:48:08.583714 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:48:08.583717 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:48:08.583720 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:48:08.583723 | orchestrator |
2026-04-18 00:48:08.583726 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-18 00:48:08.583729 | orchestrator | Saturday 18 April 2026 00:47:47 +0000 (0:00:01.555) 0:00:56.672 ********
2026-04-18 00:48:08.583732 | orchestrator | ok: [testbed-manager]
2026-04-18 00:48:08.583735 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:48:08.583738 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:48:08.583741 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:48:08.583744 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:48:08.583747 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:48:08.583750 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:48:08.583753 | orchestrator |
2026-04-18 00:48:08.583756 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-18 00:48:08.583759 | orchestrator | Saturday 18 April 2026 00:47:49 +0000 (0:00:01.383) 0:00:58.056 ********
2026-04-18 00:48:08.583762 | orchestrator | ok: [testbed-manager]
2026-04-18 00:48:08.583765 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:48:08.583768 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:48:08.583771 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:48:08.583774 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:48:08.583777 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:48:08.583780 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:48:08.583783 | orchestrator |
2026-04-18 00:48:08.583786 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-04-18 00:48:08.583789 | orchestrator | Saturday 18 April 2026 00:47:51 +0000 (0:00:01.893) 0:00:59.950 ********
2026-04-18 00:48:08.583792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-18 00:48:08.583797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:48:08.583800 | orchestrator |
2026-04-18 00:48:08.583803 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-18 00:48:08.583806 | orchestrator | Saturday 18 April 2026 00:47:52 +0000 (0:00:02.077) 0:01:01.534 ********
2026-04-18 00:48:08.583809 | orchestrator | changed: [testbed-manager]
2026-04-18 00:48:08.583812 | orchestrator |
2026-04-18 00:48:08.583815 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-18 00:48:08.583819 | orchestrator | Saturday 18 April 2026 00:47:54 +0000 (0:00:02.077) 0:01:03.612 ********
2026-04-18 00:48:08.583822 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:48:08.583825 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:48:08.583828 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:48:08.583831 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:48:08.583834 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:48:08.583837 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:48:08.583840 | orchestrator | changed: [testbed-manager]
2026-04-18 00:48:08.583843 | orchestrator |
2026-04-18 00:48:08.583846 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:48:08.583849 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:48:08.583854 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:48:08.583860 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:48:08.583863 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:48:08.583868 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:48:08.583871 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:48:08.583874 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:48:08.583877 | orchestrator |
2026-04-18 00:48:08.583880 | orchestrator |
2026-04-18 00:48:08.583883 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18
00:48:08.583886 | orchestrator | Saturday 18 April 2026 00:48:05 +0000 (0:00:11.048) 0:01:14.660 ******** 2026-04-18 00:48:08.583890 | orchestrator | =============================================================================== 2026-04-18 00:48:08.583893 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 26.31s 2026-04-18 00:48:08.583896 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.21s 2026-04-18 00:48:08.583899 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.05s 2026-04-18 00:48:08.583902 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.87s 2026-04-18 00:48:08.583905 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.26s 2026-04-18 00:48:08.583908 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.06s 2026-04-18 00:48:08.583911 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.08s 2026-04-18 00:48:08.583914 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.91s 2026-04-18 00:48:08.583927 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.89s 2026-04-18 00:48:08.583930 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.58s 2026-04-18 00:48:08.583933 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.57s 2026-04-18 00:48:08.583936 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.56s 2026-04-18 00:48:08.583939 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.38s 2026-04-18 00:48:08.583942 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.30s 2026-04-18 
00:48:08.583945 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.20s 2026-04-18 00:48:08.583948 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s 2026-04-18 00:48:08.584480 | orchestrator | 2026-04-18 00:48:08 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is in state STARTED 2026-04-18 00:48:08.584660 | orchestrator | 2026-04-18 00:48:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:48:11.615331 | orchestrator | 2026-04-18 00:48:11 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:48:11.617120 | orchestrator | 2026-04-18 00:48:11 | INFO  | Task d8185a23-f7b6-4720-b56a-40094c01a58a is in state STARTED 2026-04-18 00:48:11.618313 | orchestrator | 2026-04-18 00:48:11 | INFO  | Task b0bf6705-b009-49c4-9532-1f8fd8996195 is in state SUCCESS 2026-04-18 00:48:11.620095 | orchestrator | 2026-04-18 00:48:11.620134 | orchestrator | 2026-04-18 00:48:11.620138 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-18 00:48:11.620142 | orchestrator | 2026-04-18 00:48:11.620145 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-18 00:48:11.620149 | orchestrator | Saturday 18 April 2026 00:47:58 +0000 (0:00:00.585) 0:00:00.585 ******** 2026-04-18 00:48:11.620152 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:48:11.620156 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:48:11.620159 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:48:11.620162 | orchestrator | 2026-04-18 00:48:11.620165 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-18 00:48:11.620168 | orchestrator | Saturday 18 April 2026 00:47:59 +0000 (0:00:00.344) 0:00:00.930 ******** 2026-04-18 00:48:11.620172 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-18 00:48:11.620175 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-18 00:48:11.620178 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-18 00:48:11.620181 | orchestrator | 2026-04-18 00:48:11.620184 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-18 00:48:11.620187 | orchestrator | 2026-04-18 00:48:11.620190 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-18 00:48:11.620193 | orchestrator | Saturday 18 April 2026 00:47:59 +0000 (0:00:00.640) 0:00:01.571 ******** 2026-04-18 00:48:11.620196 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:48:11.620200 | orchestrator | 2026-04-18 00:48:11.620203 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-18 00:48:11.620206 | orchestrator | Saturday 18 April 2026 00:48:00 +0000 (0:00:00.846) 0:00:02.417 ******** 2026-04-18 00:48:11.620209 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-18 00:48:11.620212 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-18 00:48:11.620215 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-18 00:48:11.620218 | orchestrator | 2026-04-18 00:48:11.620222 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-18 00:48:11.620225 | orchestrator | Saturday 18 April 2026 00:48:02 +0000 (0:00:02.017) 0:00:04.434 ******** 2026-04-18 00:48:11.620228 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-18 00:48:11.620231 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-18 00:48:11.620234 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-18 00:48:11.620237 | orchestrator | 2026-04-18 00:48:11.620240 | orchestrator | TASK [service-check-containers : 
memcached | Check containers] ***************** 2026-04-18 00:48:11.620243 | orchestrator | Saturday 18 April 2026 00:48:04 +0000 (0:00:01.569) 0:00:06.004 ******** 2026-04-18 00:48:11.620249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-18 00:48:11.620261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-18 00:48:11.620273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 
'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-18 00:48:11.620276 | orchestrator | 2026-04-18 00:48:11.620279 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-18 00:48:11.620283 | orchestrator | Saturday 18 April 2026 00:48:05 +0000 (0:00:01.477) 0:00:07.481 ******** 2026-04-18 00:48:11.620286 | orchestrator | changed: [testbed-node-0] => { 2026-04-18 00:48:11.620289 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:11.620292 | orchestrator | } 2026-04-18 00:48:11.620295 | orchestrator | changed: [testbed-node-1] => { 2026-04-18 00:48:11.620298 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:11.620301 | orchestrator | } 2026-04-18 00:48:11.620304 | orchestrator | changed: [testbed-node-2] => { 2026-04-18 00:48:11.620307 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:11.620310 | orchestrator | } 2026-04-18 00:48:11.620313 | orchestrator | 2026-04-18 00:48:11.620316 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-18 00:48:11.620319 | orchestrator | Saturday 18 April 2026 00:48:06 +0000 (0:00:00.590) 0:00:08.072 ******** 2026-04-18 00:48:11.620325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 
'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-18 00:48:11.620328 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:11.620331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-18 00:48:11.620334 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:48:11.620338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-18 00:48:11.620343 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:48:11.620346 | orchestrator | 2026-04-18 00:48:11.620349 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-18 00:48:11.620352 | orchestrator | Saturday 18 April 2026 00:48:07 +0000 (0:00:01.505) 0:00:09.577 ******** 2026-04-18 00:48:11.620361 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_2jotbs1f/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_2jotbs1f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File 
\"/tmp/ansible_kolla_container_payload_2jotbs1f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_2jotbs1f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached: Internal Server Error (\"unknown: repository kolla/release/2024.2/memcached not found\")\\n'"} 2026-04-18 00:48:11.620368 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_316ctw_m/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_316ctw_m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_316ctw_m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_316ctw_m/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached: Internal Server Error (\"unknown: repository kolla/release/2024.2/memcached not found\")\\n'"} 2026-04-18 00:48:11.620376 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_44tvj5el/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_44tvj5el/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_44tvj5el/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_44tvj5el/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line 
in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.6.24.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmemcached: Internal Server Error (\"unknown: repository kolla/release/2024.2/memcached not found\")\\n'"} 2026-04-18 00:48:11.620382 | orchestrator | 2026-04-18 00:48:11.620385 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:48:11.620388 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-18 00:48:11.620392 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-18 00:48:11.620395 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2026-04-18 00:48:11.620398 | orchestrator | 2026-04-18 00:48:11.620401 | orchestrator | 2026-04-18 00:48:11.620404 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:48:11.620407 | orchestrator | Saturday 18 April 2026 00:48:09 +0000 (0:00:01.585) 0:00:11.162 ******** 2026-04-18 00:48:11.620411 | orchestrator | =============================================================================== 2026-04-18 00:48:11.620414 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.02s 2026-04-18 00:48:11.620417 | orchestrator | memcached 
: Restart memcached container --------------------------------- 1.59s 2026-04-18 00:48:11.620420 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.57s 2026-04-18 00:48:11.620423 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.51s 2026-04-18 00:48:11.620426 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.48s 2026-04-18 00:48:11.620428 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.85s 2026-04-18 00:48:11.620432 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2026-04-18 00:48:11.620436 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.59s 2026-04-18 00:48:11.620439 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-04-18 00:48:11.622788 | orchestrator | 2026-04-18 00:48:11 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED 2026-04-18 00:48:11.623500 | orchestrator | 2026-04-18 00:48:11 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED 2026-04-18 00:48:11.624233 | orchestrator | 2026-04-18 00:48:11 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED 2026-04-18 00:48:11.625052 | orchestrator | 2026-04-18 00:48:11 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is in state STARTED 2026-04-18 00:48:11.625256 | orchestrator | 2026-04-18 00:48:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:48:14.657048 | orchestrator | 2026-04-18 00:48:14 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:48:14.657112 | orchestrator | 2026-04-18 00:48:14.657118 | orchestrator | 2026-04-18 00:48:14 | INFO  | Task d8185a23-f7b6-4720-b56a-40094c01a58a is in state SUCCESS 2026-04-18 00:48:14.659056 | orchestrator | 2026-04-18 00:48:14.659094 | 
orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-18 00:48:14.659099 | orchestrator | 2026-04-18 00:48:14.659103 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-18 00:48:14.659106 | orchestrator | Saturday 18 April 2026 00:47:59 +0000 (0:00:00.617) 0:00:00.618 ******** 2026-04-18 00:48:14.659109 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:48:14.659115 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:48:14.659137 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:48:14.659144 | orchestrator | 2026-04-18 00:48:14.659158 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-18 00:48:14.659164 | orchestrator | Saturday 18 April 2026 00:47:59 +0000 (0:00:00.320) 0:00:00.938 ******** 2026-04-18 00:48:14.659170 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-18 00:48:14.659175 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-18 00:48:14.659178 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-18 00:48:14.659181 | orchestrator | 2026-04-18 00:48:14.659185 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-18 00:48:14.659188 | orchestrator | 2026-04-18 00:48:14.659191 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-18 00:48:14.659196 | orchestrator | Saturday 18 April 2026 00:48:00 +0000 (0:00:00.237) 0:00:01.176 ******** 2026-04-18 00:48:14.659201 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:48:14.659208 | orchestrator | 2026-04-18 00:48:14.659213 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-18 00:48:14.659218 | orchestrator | Saturday 18 April 2026 00:48:00 +0000 
(0:00:00.727) 0:00:01.903 ******** 2026-04-18 00:48:14.659225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-18 00:48:14.659234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-18 00:48:14.659239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 
2026-04-18 00:48:14.659243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659274 | orchestrator |
2026-04-18 00:48:14.659278 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-18 00:48:14.659281 | orchestrator | Saturday 18 April 2026 00:48:03 +0000 (0:00:02.597) 0:00:04.501 ********
2026-04-18 00:48:14.659284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659316 | orchestrator |
2026-04-18 00:48:14.659319 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-18 00:48:14.659322 | orchestrator | Saturday 18 April 2026 00:48:05 +0000 (0:00:02.521) 0:00:07.023 ********
2026-04-18 00:48:14.659325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659352 | orchestrator |
2026-04-18 00:48:14.659356 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-04-18 00:48:14.659359 | orchestrator | Saturday 18 April 2026 00:48:08 +0000 (0:00:02.767) 0:00:09.791 ********
2026-04-18 00:48:14.659362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659388 | orchestrator |
2026-04-18 00:48:14.659391 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-04-18 00:48:14.659395 | orchestrator | Saturday 18 April 2026 00:48:10 +0000 (0:00:01.822) 0:00:11.614 ********
2026-04-18 00:48:14.659398 | orchestrator | changed: [testbed-node-0] => {
2026-04-18 00:48:14.659402 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:48:14.659405 | orchestrator | }
2026-04-18 00:48:14.659408 | orchestrator | changed: [testbed-node-1] => {
2026-04-18 00:48:14.659411 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:48:14.659414 | orchestrator | }
2026-04-18 00:48:14.659418 | orchestrator | changed: [testbed-node-2] => {
2026-04-18 00:48:14.659421 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:48:14.659424 | orchestrator | }
2026-04-18 00:48:14.659427 | orchestrator |
2026-04-18 00:48:14.659430 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-18 00:48:14.659434 | orchestrator | Saturday 18 April 2026 00:48:11 +0000 (0:00:00.593) 0:00:12.207 ********
2026-04-18 00:48:14.659439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659450 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:48:14.659454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659460 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:48:14.659468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659472 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-18 00:48:14.659475 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:48:14.659478 | orchestrator |
2026-04-18 00:48:14.659481 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-18 00:48:14.659485 | orchestrator | Saturday 18 April 2026 00:48:12 +0000 (0:00:01.040) 0:00:13.248 ********
2026-04-18 00:48:14.659488 | orchestrator |
2026-04-18 00:48:14.659491 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-18 00:48:14.659494 | orchestrator | Saturday 18 April 2026 00:48:12 +0000 (0:00:00.123) 0:00:13.372 ********
2026-04-18 00:48:14.659498 | orchestrator |
2026-04-18 00:48:14.659503 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-18 00:48:14.659507 | orchestrator | Saturday 18 April 2026 00:48:12 +0000 (0:00:00.182) 0:00:13.555 ********
2026-04-18 00:48:14.659513 | orchestrator |
2026-04-18 00:48:14.659518 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-18 00:48:14.659523 | orchestrator | Saturday 18 April 2026 00:48:12 +0000 (0:00:00.106) 0:00:13.662 ********
2026-04-18 00:48:14.659537 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_fnmvdx6h/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_fnmvdx6h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_fnmvdx6h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_fnmvdx6h/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis: Internal Server Error (\"unknown: repository kolla/release/2024.2/redis not found\")\\n'"}
2026-04-18 00:48:14.659547 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_6kye41g2/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_6kye41g2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_6kye41g2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_6kye41g2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis: Internal Server Error (\"unknown: repository kolla/release/2024.2/redis not found\")\\n'"}
2026-04-18 00:48:14.659565 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ftkxmz0l/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ftkxmz0l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_ftkxmz0l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ftkxmz0l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=7.0.15.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fredis: Internal Server Error (\"unknown: repository kolla/release/2024.2/redis not found\")\\n'"}
2026-04-18 00:48:14.659570 | orchestrator |
2026-04-18 00:48:14.659573 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:48:14.659576 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-18 00:48:14.659580 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-18 00:48:14.659584 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-18 00:48:14.659589 | orchestrator |
2026-04-18 00:48:14.659592 | orchestrator |
2026-04-18 00:48:14.659596 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:48:14.659599 | orchestrator | Saturday 18 April 2026 00:48:14 +0000 (0:00:01.485) 0:00:15.147 ********
2026-04-18 00:48:14.659602 | orchestrator | ===============================================================================
2026-04-18 00:48:14.659605 | orchestrator | redis : Copying over redis config files --------------------------------- 2.77s
2026-04-18 00:48:14.659608 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.60s
2026-04-18 00:48:14.659611 | orchestrator | redis : Copying over default config.json files -------------------------- 2.52s
2026-04-18 00:48:14.659614 | orchestrator | service-check-containers : redis | Check containers --------------------- 1.82s
2026-04-18 00:48:14.659618 | orchestrator | redis : Restart redis container ----------------------------------------- 1.49s
2026-04-18 00:48:14.659623 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.04s
2026-04-18 00:48:14.659628 | orchestrator | redis : include_tasks --------------------------------------------------- 0.73s
2026-04-18 00:48:14.659637 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.59s
2026-04-18 00:48:14.659643 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.41s
2026-04-18 00:48:14.659649 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-04-18 00:48:14.659654 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.24s
2026-04-18 00:48:14.659659 | orchestrator | 2026-04-18 00:48:14 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:14.659853 | orchestrator | 2026-04-18 00:48:14 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:14.660672 | orchestrator | 2026-04-18 00:48:14 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:14.661372 | orchestrator | 2026-04-18 00:48:14 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is in state STARTED
2026-04-18 00:48:14.661399 | orchestrator | 2026-04-18 00:48:14 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:17.684752 | orchestrator | 2026-04-18 00:48:17 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:17.686508 | orchestrator | 2026-04-18 00:48:17 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:17.687800 | orchestrator | 2026-04-18 00:48:17 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:17.687821 | orchestrator | 2026-04-18 00:48:17 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:17.688954 | orchestrator | 2026-04-18 00:48:17 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is in state STARTED
2026-04-18 00:48:17.689417 | orchestrator | 2026-04-18 00:48:17 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:20.742066 | orchestrator | 2026-04-18 00:48:20 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:20.748364 | orchestrator | 2026-04-18 00:48:20 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:20.755117 | orchestrator | 2026-04-18 00:48:20 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:20.762878 | orchestrator | 2026-04-18 00:48:20 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:20.772115 | orchestrator | 2026-04-18 00:48:20 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is in state STARTED
2026-04-18 00:48:20.772168 | orchestrator | 2026-04-18 00:48:20 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:23.805270 | orchestrator | 2026-04-18 00:48:23 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:23.806182 | orchestrator | 2026-04-18 00:48:23 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:23.808418 | orchestrator | 2026-04-18 00:48:23 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:23.810899 | orchestrator | 2026-04-18 00:48:23 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:23.813158 | orchestrator | 2026-04-18 00:48:23 | INFO  | Task 15246ffc-1925-400e-9e45-e926fc9e7e3e is in state SUCCESS
2026-04-18 00:48:23.813193 | orchestrator | 2026-04-18 00:48:23 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:23.814548 | orchestrator |
2026-04-18 00:48:23.814638 | orchestrator |
2026-04-18 00:48:23.814644 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-18 00:48:23.814648 | orchestrator |
2026-04-18 00:48:23.814651 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-18 00:48:23.814654 | orchestrator | Saturday 18 April 2026 00:47:58 +0000 (0:00:00.318) 0:00:00.318 ********
2026-04-18 00:48:23.814658 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:48:23.814661 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:48:23.814665 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:48:23.814668 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:48:23.814671 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:48:23.814674 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:48:23.814677 | orchestrator |
2026-04-18 00:48:23.814680 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:48:23.814684 | orchestrator | Saturday 18 April 2026 00:47:59 +0000 (0:00:00.595) 0:00:00.913 ********
2026-04-18 00:48:23.814687 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-18 00:48:23.814691 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-18 00:48:23.814694 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-18 00:48:23.814697 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-18 00:48:23.814700 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-18 00:48:23.814703 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-18 00:48:23.814706 | orchestrator |
2026-04-18 00:48:23.814709 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-18 00:48:23.814713 | orchestrator |
2026-04-18 00:48:23.814716 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-18 00:48:23.814719 | orchestrator | Saturday 18 April 2026 00:48:00 +0000 (0:00:01.271) 0:00:02.184 ********
2026-04-18 00:48:23.814723 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:48:23.814726 | orchestrator |
2026-04-18 00:48:23.814730 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-18 00:48:23.814733 | orchestrator | Saturday 18 April 2026 00:48:02 +0000 (0:00:02.138) 0:00:04.323 ********
2026-04-18 00:48:23.814736 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-18 00:48:23.814739 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-18 00:48:23.814743 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-18 00:48:23.814746 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-18 00:48:23.814749 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-18 00:48:23.814752 | orchestrator |
changed: [testbed-node-2] => (item=openvswitch) 2026-04-18 00:48:23.814755 | orchestrator | 2026-04-18 00:48:23.814767 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-18 00:48:23.814770 | orchestrator | Saturday 18 April 2026 00:48:04 +0000 (0:00:02.011) 0:00:06.334 ******** 2026-04-18 00:48:23.814773 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-18 00:48:23.814776 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-18 00:48:23.814779 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-18 00:48:23.814782 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-18 00:48:23.814785 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-18 00:48:23.814789 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-18 00:48:23.814792 | orchestrator | 2026-04-18 00:48:23.814795 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-18 00:48:23.814798 | orchestrator | Saturday 18 April 2026 00:48:06 +0000 (0:00:02.231) 0:00:08.565 ******** 2026-04-18 00:48:23.814803 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-18 00:48:23.814807 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:23.814829 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-18 00:48:23.814833 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:48:23.814840 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-18 00:48:23.814843 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:48:23.814847 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-18 00:48:23.814850 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:48:23.814853 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-18 00:48:23.814857 | orchestrator | skipping: [testbed-node-4] 
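The two `module-load` tasks above load the `openvswitch` kernel module immediately and then persist it via a `modules-load.d` drop-in so it survives reboots. As a minimal Python sketch of that pattern (the `persist_module`/`load_module` helpers are illustrative, not part of the kolla-ansible role; the example writes to a temporary directory instead of `/etc/modules-load.d` so it can run without root):

```python
import os
import subprocess
import tempfile


def persist_module(name: str, conf_dir: str = "/etc/modules-load.d") -> str:
    """Write a modules-load.d drop-in so `name` is loaded on every boot."""
    os.makedirs(conf_dir, exist_ok=True)
    path = os.path.join(conf_dir, f"{name}.conf")
    with open(path, "w") as fh:
        fh.write(name + "\n")
    return path


def load_module(name: str) -> bool:
    """Load the kernel module now via modprobe (requires root)."""
    return subprocess.run(["modprobe", name]).returncode == 0


# Demonstrate persistence against a temp dir rather than the real /etc:
tmp = tempfile.mkdtemp()
conf = persist_module("openvswitch", conf_dir=tmp)
```

The subsequent "Drop module persistence" task is the inverse operation (removing the drop-in file); it is skipped here because persistence is wanted.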
2026-04-18 00:48:23.814860 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-18 00:48:23.814863 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:48:23.814866 | orchestrator | 2026-04-18 00:48:23.814870 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-18 00:48:23.814873 | orchestrator | Saturday 18 April 2026 00:48:07 +0000 (0:00:01.002) 0:00:09.568 ******** 2026-04-18 00:48:23.814876 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:23.814879 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:48:23.814883 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:48:23.814886 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:48:23.814889 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:48:23.814892 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:48:23.814896 | orchestrator | 2026-04-18 00:48:23.814899 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-18 00:48:23.814902 | orchestrator | Saturday 18 April 2026 00:48:08 +0000 (0:00:01.069) 0:00:10.637 ******** 2026-04-18 00:48:23.814941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.814947 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.814954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.814957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.814962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.814969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.814973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.814976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.814982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.814987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815108 | orchestrator | 2026-04-18 00:48:23.815113 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-18 00:48:23.815118 | orchestrator | Saturday 18 April 2026 00:48:10 +0000 (0:00:01.621) 0:00:12.258 ******** 2026-04-18 00:48:23.815123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815158 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 
00:48:23.815176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815193 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815216 | orchestrator | 2026-04-18 00:48:23.815232 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-18 00:48:23.815237 | orchestrator | Saturday 18 April 2026 
00:48:13 +0000 (0:00:02.967) 0:00:15.226 ******** 2026-04-18 00:48:23.815242 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:23.815248 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:48:23.815253 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:48:23.815258 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:48:23.815263 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:48:23.815268 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:48:23.815273 | orchestrator | 2026-04-18 00:48:23.815277 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-18 00:48:23.815282 | orchestrator | Saturday 18 April 2026 00:48:13 +0000 (0:00:00.616) 0:00:15.843 ******** 2026-04-18 00:48:23.815287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 
'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-18 00:48:23.815363 | orchestrator | 2026-04-18 00:48:23.815366 | orchestrator | TASK 
[service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-18 00:48:23.815370 | orchestrator | Saturday 18 April 2026 00:48:16 +0000 (0:00:02.168) 0:00:18.012 ******** 2026-04-18 00:48:23.815373 | orchestrator | changed: [testbed-node-0] => { 2026-04-18 00:48:23.815376 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:23.815379 | orchestrator | } 2026-04-18 00:48:23.815382 | orchestrator | changed: [testbed-node-1] => { 2026-04-18 00:48:23.815385 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:23.815388 | orchestrator | } 2026-04-18 00:48:23.815391 | orchestrator | changed: [testbed-node-2] => { 2026-04-18 00:48:23.815394 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:23.815398 | orchestrator | } 2026-04-18 00:48:23.815401 | orchestrator | changed: [testbed-node-3] => { 2026-04-18 00:48:23.815404 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:23.815407 | orchestrator | } 2026-04-18 00:48:23.815410 | orchestrator | changed: [testbed-node-4] => { 2026-04-18 00:48:23.815412 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:23.815415 | orchestrator | } 2026-04-18 00:48:23.815419 | orchestrator | changed: [testbed-node-5] => { 2026-04-18 00:48:23.815422 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:23.815424 | orchestrator | } 2026-04-18 00:48:23.815427 | orchestrator | 2026-04-18 00:48:23.815431 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-18 00:48:23.815435 | orchestrator | Saturday 18 April 2026 00:48:16 +0000 (0:00:00.733) 0:00:18.745 ******** 2026-04-18 00:48:23.815441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-18 00:48:23.815448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-18 00:48:23.815451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-18 00:48:23.815454 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-18 00:48:23.815457 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:23.815461 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:48:23.815464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-18 00:48:23.815469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-18 00:48:23.815474 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:48:23.815478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-18 00:48:23.815484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  
2026-04-18 00:48:23.815487 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:48:23.815490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-18 00:48:23.815493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-18 00:48:23.815497 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:48:23.815501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-18 00:48:23.815507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-18 00:48:23.815510 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:48:23.815513 | orchestrator | 2026-04-18 00:48:23.815516 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-18 00:48:23.815519 | orchestrator | Saturday 18 April 2026 00:48:18 +0000 (0:00:01.614) 0:00:20.359 ******** 2026-04-18 00:48:23.815522 | orchestrator | 2026-04-18 00:48:23.815526 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-18 00:48:23.815529 | orchestrator | Saturday 18 April 2026 00:48:18 +0000 (0:00:00.322) 0:00:20.682 ******** 2026-04-18 00:48:23.815532 | orchestrator | 2026-04-18 00:48:23.815537 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-04-18 00:48:23.815540 | orchestrator | Saturday 18 April 2026 00:48:18 +0000 (0:00:00.141) 0:00:20.823 ******** 2026-04-18 00:48:23.815543 | orchestrator | 2026-04-18 00:48:23.815546 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-18 00:48:23.815549 | orchestrator | Saturday 18 April 2026 00:48:19 +0000 (0:00:00.181) 0:00:21.004 ******** 2026-04-18 00:48:23.815552 | orchestrator | 2026-04-18 00:48:23.815555 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-18 00:48:23.815558 | orchestrator | Saturday 18 April 2026 00:48:19 +0000 (0:00:00.224) 0:00:21.229 ******** 2026-04-18 00:48:23.815561 | orchestrator | 2026-04-18 00:48:23.815565 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-18 00:48:23.815568 | orchestrator | Saturday 18 April 2026 00:48:19 +0000 (0:00:00.218) 0:00:21.447 ******** 2026-04-18 00:48:23.815571 | orchestrator | 2026-04-18 00:48:23.815574 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-18 00:48:23.815577 | orchestrator | Saturday 18 April 2026 00:48:19 +0000 (0:00:00.234) 0:00:21.682 ******** 2026-04-18 00:48:23.815809 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_onb_6v_f/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_onb_6v_f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_onb_6v_f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_onb_6v_f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-18 00:48:23.815826 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_qsdk7101/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_qsdk7101/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_qsdk7101/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_qsdk7101/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-18 00:48:23.815834 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_79_muysl/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_79_muysl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_79_muysl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_79_muysl/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-18 00:48:23.815843 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_qin7rkhc/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_qin7rkhc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_qin7rkhc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_qin7rkhc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-18 00:48:23.815854 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_nqdma7m1/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_nqdma7m1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_nqdma7m1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_nqdma7m1/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-18 00:48:23.815862 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_3ru8f1t0/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_3ru8f1t0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_3ru8f1t0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_3ru8f1t0/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.5.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopenvswitch-db-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/openvswitch-db-server not found\")\\n'"} 2026-04-18 00:48:23.815868 | orchestrator | 2026-04-18 00:48:23.815872 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:48:23.815876 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-18 00:48:23.815882 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-18 00:48:23.815886 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 
2026-04-18 00:48:23.815890 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-18 00:48:23.815894 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-18 00:48:23.815897 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2026-04-18 00:48:23.815901 | orchestrator |
2026-04-18 00:48:23.815904 | orchestrator |
2026-04-18 00:48:23.815908 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:48:23.815941 | orchestrator | Saturday 18 April 2026 00:48:22 +0000 (0:00:03.101) 0:00:24.783 ********
2026-04-18 00:48:23.815946 | orchestrator | ===============================================================================
2026-04-18 00:48:23.815949 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 3.10s
2026-04-18 00:48:23.815953 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.97s
2026-04-18 00:48:23.815957 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.23s
2026-04-18 00:48:23.815963 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.17s
2026-04-18 00:48:23.815967 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.14s
2026-04-18 00:48:23.815970 | orchestrator | module-load : Load modules ---------------------------------------------- 2.01s
2026-04-18 00:48:23.815976 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.62s
2026-04-18 00:48:23.815980 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.61s
2026-04-18 00:48:23.815983 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.32s
2026-04-18 00:48:23.815987 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.27s
2026-04-18 00:48:23.815990 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.07s
2026-04-18 00:48:23.815994 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.00s
2026-04-18 00:48:23.815997 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.73s
2026-04-18 00:48:23.816001 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.62s
2026-04-18 00:48:23.816005 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s
2026-04-18 00:48:26.869494 | orchestrator | 2026-04-18 00:48:26 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:26.871863 | orchestrator | 2026-04-18 00:48:26 | INFO  | Task dbab34af-d2a0-4cf3-8591-fa2ea7d712a1 is in state STARTED
2026-04-18 00:48:26.874681 | orchestrator | 2026-04-18 00:48:26 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:26.877966 | orchestrator | 2026-04-18 00:48:26 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:26.880101 | orchestrator | 2026-04-18 00:48:26 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:26.880432 | orchestrator | 2026-04-18 00:48:26 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:29.911558 | orchestrator | 2026-04-18 00:48:29 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:29.911890 | orchestrator | 2026-04-18 00:48:29 | INFO  | Task dbab34af-d2a0-4cf3-8591-fa2ea7d712a1 is in state STARTED
2026-04-18 00:48:29.912790 | orchestrator | 2026-04-18 00:48:29 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:29.913504 | orchestrator | 2026-04-18 00:48:29 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:29.914219 | orchestrator | 2026-04-18 00:48:29 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:29.914347 | orchestrator | 2026-04-18 00:48:29 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:32.968758 | orchestrator | 2026-04-18 00:48:32 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:32.969896 | orchestrator | 2026-04-18 00:48:32 | INFO  | Task dbab34af-d2a0-4cf3-8591-fa2ea7d712a1 is in state STARTED
2026-04-18 00:48:32.970140 | orchestrator | 2026-04-18 00:48:32 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:32.971034 | orchestrator | 2026-04-18 00:48:32 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:32.973794 | orchestrator | 2026-04-18 00:48:32 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:32.973844 | orchestrator | 2026-04-18 00:48:32 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:36.005855 | orchestrator | 2026-04-18 00:48:36 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:36.008288 | orchestrator | 2026-04-18 00:48:36 | INFO  | Task dbab34af-d2a0-4cf3-8591-fa2ea7d712a1 is in state STARTED
2026-04-18 00:48:36.008976 | orchestrator | 2026-04-18 00:48:36 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:36.009649 | orchestrator | 2026-04-18 00:48:36 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:36.010287 | orchestrator | 2026-04-18 00:48:36 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:36.010339 | orchestrator | 2026-04-18 00:48:36 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:39.040561 | orchestrator | 2026-04-18 00:48:39 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:39.041077 | orchestrator | 2026-04-18 00:48:39 | INFO  | Task dbab34af-d2a0-4cf3-8591-fa2ea7d712a1 is in state STARTED
2026-04-18 00:48:39.041960 | orchestrator | 2026-04-18 00:48:39 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:39.042747 | orchestrator | 2026-04-18 00:48:39 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:39.043514 | orchestrator | 2026-04-18 00:48:39 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:39.043630 | orchestrator | 2026-04-18 00:48:39 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:42.073898 | orchestrator | 2026-04-18 00:48:42 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:42.078923 | orchestrator | 2026-04-18 00:48:42 | INFO  | Task dbab34af-d2a0-4cf3-8591-fa2ea7d712a1 is in state STARTED
2026-04-18 00:48:42.080585 | orchestrator | 2026-04-18 00:48:42 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:42.081515 | orchestrator | 2026-04-18 00:48:42 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:42.082488 | orchestrator | 2026-04-18 00:48:42 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:42.082510 | orchestrator | 2026-04-18 00:48:42 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:45.123227 | orchestrator | 2026-04-18 00:48:45 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:45.124487 | orchestrator | 2026-04-18 00:48:45 | INFO  | Task dbab34af-d2a0-4cf3-8591-fa2ea7d712a1 is in state SUCCESS
2026-04-18 00:48:45.125288 | orchestrator |
2026-04-18 00:48:45.125320 | orchestrator |
2026-04-18 00:48:45.125329 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-18 00:48:45.125336 | orchestrator |
2026-04-18 00:48:45.125343 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-18 00:48:45.125350 | orchestrator | Saturday 18 April 2026 00:48:27 +0000 (0:00:00.244) 0:00:00.244 ********
2026-04-18 00:48:45.125356 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:48:45.125363 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:48:45.125370 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:48:45.125376 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:48:45.125382 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:48:45.125389 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:48:45.125395 | orchestrator |
2026-04-18 00:48:45.125402 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:48:45.125408 | orchestrator | Saturday 18 April 2026 00:48:28 +0000 (0:00:00.542) 0:00:00.786 ********
2026-04-18 00:48:45.125414 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-18 00:48:45.125421 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-18 00:48:45.125427 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-18 00:48:45.125433 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-18 00:48:45.125456 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-18 00:48:45.125463 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-18 00:48:45.125469 | orchestrator |
2026-04-18 00:48:45.125475 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-18 00:48:45.125482 | orchestrator |
2026-04-18 00:48:45.125488 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-18 00:48:45.125495 | orchestrator | Saturday 18 April 2026 00:48:29 +0000 (0:00:00.927) 0:00:01.714 ********
2026-04-18 00:48:45.125502 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:48:45.125510 | orchestrator |
2026-04-18 00:48:45.125514 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-18 00:48:45.125518 | orchestrator | Saturday 18 April 2026 00:48:30 +0000 (0:00:01.050) 0:00:02.765 ********
2026-04-18 00:48:45.125523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125548 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125564 | orchestrator |
2026-04-18 00:48:45.125568 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-04-18 00:48:45.125572 | orchestrator | Saturday 18 April 2026 00:48:31 +0000 (0:00:01.607) 0:00:04.373 ********
2026-04-18 00:48:45.125580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125603 | orchestrator |
2026-04-18 00:48:45.125607 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-04-18 00:48:45.125613 | orchestrator | Saturday 18 April 2026 00:48:34 +0000 (0:00:02.431) 0:00:06.804 ********
2026-04-18 00:48:45.125617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125651 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125659 | orchestrator |
2026-04-18 00:48:45.125663 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-04-18 00:48:45.125667 | orchestrator | Saturday 18 April 2026 00:48:35 +0000 (0:00:01.388) 0:00:08.193 ********
2026-04-18 00:48:45.125670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125694 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125702 | orchestrator |
2026-04-18 00:48:45.125706 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-04-18 00:48:45.125710 | orchestrator | Saturday 18 April 2026 00:48:37 +0000 (0:00:01.496) 0:00:10.130 ********
2026-04-18 00:48:45.125714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125735 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125742 | orchestrator |
2026-04-18 00:48:45.125746 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-04-18 00:48:45.125750 | orchestrator | Saturday 18 April 2026 00:48:39 +0000 (0:00:01.496) 0:00:11.626 ********
2026-04-18 00:48:45.125754 | orchestrator | changed: [testbed-node-0] => {
2026-04-18 00:48:45.125758 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:48:45.125762 | orchestrator | }
2026-04-18 00:48:45.125766 | orchestrator | changed: [testbed-node-1] => {
2026-04-18 00:48:45.125770 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:48:45.125774 | orchestrator | }
2026-04-18 00:48:45.125778 | orchestrator | changed: [testbed-node-2] => {
2026-04-18 00:48:45.125781 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:48:45.125785 | orchestrator | }
2026-04-18 00:48:45.125789 | orchestrator | changed: [testbed-node-3] => {
2026-04-18 00:48:45.125792 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:48:45.125796 | orchestrator | }
2026-04-18 00:48:45.125800 | orchestrator | changed: [testbed-node-4] => {
2026-04-18 00:48:45.125804 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:48:45.125808 | orchestrator | }
2026-04-18 00:48:45.125814 | orchestrator | changed: [testbed-node-5] => {
2026-04-18 00:48:45.125818 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:48:45.125821 | orchestrator | }
2026-04-18 00:48:45.125825 | orchestrator |
2026-04-18 00:48:45.125829 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-18 00:48:45.125833 | orchestrator | Saturday 18 April 2026 00:48:39 +0000 (0:00:00.521) 0:00:12.148 ********
2026-04-18 00:48:45.125837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125840 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:48:45.125844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125852 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:48:45.125861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125865 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:48:45.125874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125881 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:48:45.125885 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:48:45.125891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:48:45.125894 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:48:45.125898 | orchestrator |
2026-04-18 00:48:45.125920 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-04-18 00:48:45.125925 | orchestrator | Saturday 18 April 2026 00:48:41 +0000 (0:00:01.442) 0:00:13.591 ********
2026-04-18 00:48:45.125930 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:48:45.125934 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:48:45.125938 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:48:45.125942 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:48:45.125947 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:48:45.125953 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:48:45.125958 | orchestrator |
2026-04-18 00:48:45.125962 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:48:45.125967 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-18 00:48:45.125971 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-18 00:48:45.125976 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-18 00:48:45.125980 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-18 00:48:45.125985 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-18 00:48:45.125989 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=1  skipped=1  rescued=0 ignored=0
2026-04-18 00:48:45.125993 | orchestrator |
2026-04-18 00:48:45.125997 | orchestrator |
2026-04-18 00:48:45.126001 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:48:45.126006 | orchestrator | Saturday 18 April 2026 00:48:42 +0000 (0:00:01.129) 0:00:14.720 ********
2026-04-18 00:48:45.126010 | orchestrator | ===============================================================================
2026-04-18 00:48:45.126041 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.43s
2026-04-18 00:48:45.126046 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.94s
2026-04-18 00:48:45.126050 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.61s
2026-04-18 00:48:45.126058 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 1.50s
2026-04-18 00:48:45.126063 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.44s
2026-04-18 00:48:45.126067 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.39s
2026-04-18 00:48:45.126071 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 1.13s
2026-04-18 00:48:45.126076 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.05s
2026-04-18 00:48:45.126080 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s
2026-04-18 00:48:45.126084 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s
2026-04-18 00:48:45.126088 | orchestrator | service-check-containers : ovn_controller | Notify handlers to restart containers --- 0.52s
2026-04-18 00:48:45.127827 | orchestrator | 2026-04-18 00:48:45 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:45.132765 | orchestrator | 2026-04-18 00:48:45 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state STARTED
2026-04-18 00:48:45.135545 | orchestrator | 2026-04-18 00:48:45 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:45.135590 | orchestrator | 2026-04-18 00:48:45 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:48.169364 | orchestrator | 2026-04-18 00:48:48 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:48.170298 | orchestrator | 2026-04-18 00:48:48 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:48.171158 | orchestrator | 2026-04-18 00:48:48 | INFO  | Task 5af498e9-c4c9-45fd-8eb2-19d0f5cf7e62 is in state SUCCESS
2026-04-18 00:48:48.172380 | orchestrator |
2026-04-18 00:48:48.172412 | orchestrator |
2026-04-18 00:48:48.172515 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-04-18 00:48:48.172525 | orchestrator |
2026-04-18 00:48:48.172530 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-18 00:48:48.172535 | orchestrator | Saturday 18 April 2026 00:48:13 +0000 (0:00:00.116) 0:00:00.116 ********
2026-04-18 00:48:48.172540 | orchestrator | ok: [localhost] => {
2026-04-18 00:48:48.172547 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-04-18 00:48:48.172552 | orchestrator | }
2026-04-18 00:48:48.172557 | orchestrator |
2026-04-18 00:48:48.172561 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-04-18 00:48:48.172567 | orchestrator | Saturday 18 April 2026 00:48:13 +0000 (0:00:00.033) 0:00:00.150 ********
2026-04-18 00:48:48.172573 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-04-18 00:48:48.172579 | orchestrator | ...ignoring
2026-04-18 00:48:48.172584 | orchestrator |
2026-04-18 00:48:48.172588 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-04-18 00:48:48.172593 | orchestrator | Saturday 18 April 2026 00:48:17 +0000 (0:00:03.466) 0:00:03.617 ********
2026-04-18 00:48:48.172597 | orchestrator | skipping: [localhost]
2026-04-18 00:48:48.172602 | orchestrator |
2026-04-18 00:48:48.172606 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-04-18 00:48:48.172611 | orchestrator | Saturday 18 April 2026 00:48:17 +0000 (0:00:00.103) 0:00:03.721 ********
2026-04-18 00:48:48.172615 | orchestrator | ok: [localhost]
2026-04-18 00:48:48.172620 | orchestrator |
2026-04-18 00:48:48.172625 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-18 00:48:48.172629 | orchestrator |
2026-04-18 00:48:48.172634 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-18 00:48:48.172638 | orchestrator | Saturday 18 April 2026 00:48:17 +0000 (0:00:00.244) 0:00:03.965 ********
2026-04-18 00:48:48.172657 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:48:48.172662 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:48:48.172666 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:48:48.172671 | orchestrator |
2026-04-18 00:48:48.172675 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:48:48.172680 | orchestrator | Saturday 18 April 2026 00:48:18 +0000 (0:00:00.583) 0:00:04.549 ********
2026-04-18 00:48:48.172684 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-04-18 00:48:48.172689 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-18 00:48:48.172694 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-18 00:48:48.172698 | orchestrator |
2026-04-18 00:48:48.172703 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-18 00:48:48.172707 | orchestrator |
2026-04-18 00:48:48.172712 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-18 00:48:48.172716 | orchestrator | Saturday 18 April 2026 00:48:18 +0000 (0:00:00.564) 0:00:05.113 ********
2026-04-18 00:48:48.172721 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:48:48.172725 | orchestrator |
2026-04-18 00:48:48.172730 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-18 00:48:48.172735 | orchestrator | Saturday 18 April 2026 00:48:20 +0000 (0:00:01.517) 0:00:06.630 ********
2026-04-18 00:48:48.172739 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:48:48.172744 | orchestrator |
2026-04-18 00:48:48.172748 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-18 00:48:48.172753 | orchestrator | Saturday 18 April 2026 00:48:22 +0000 (0:00:02.239) 0:00:08.870 ********
2026-04-18 00:48:48.172757 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:48:48.172762 | orchestrator |
2026-04-18 00:48:48.172767 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-18 00:48:48.172771 | orchestrator | Saturday 18 April 2026 00:48:23 +0000 (0:00:00.423) 0:00:09.294 ********
2026-04-18 00:48:48.172776 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:48:48.172780 | orchestrator |
2026-04-18 00:48:48.172785 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-18 00:48:48.172789 |
orchestrator | Saturday 18 April 2026 00:48:23 +0000 (0:00:00.341) 0:00:09.635 ******** 2026-04-18 00:48:48.172794 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:48.172798 | orchestrator | 2026-04-18 00:48:48.172803 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-18 00:48:48.172807 | orchestrator | Saturday 18 April 2026 00:48:23 +0000 (0:00:00.427) 0:00:10.062 ******** 2026-04-18 00:48:48.172812 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:48.172816 | orchestrator | 2026-04-18 00:48:48.172821 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-18 00:48:48.172826 | orchestrator | Saturday 18 April 2026 00:48:24 +0000 (0:00:00.356) 0:00:10.419 ******** 2026-04-18 00:48:48.172830 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:48:48.172835 | orchestrator | 2026-04-18 00:48:48.172840 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-18 00:48:48.172844 | orchestrator | Saturday 18 April 2026 00:48:24 +0000 (0:00:00.567) 0:00:10.986 ******** 2026-04-18 00:48:48.172849 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:48:48.172853 | orchestrator | 2026-04-18 00:48:48.172858 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-18 00:48:48.172862 | orchestrator | Saturday 18 April 2026 00:48:25 +0000 (0:00:00.765) 0:00:11.752 ******** 2026-04-18 00:48:48.172867 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:48.172871 | orchestrator | 2026-04-18 00:48:48.172876 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-18 00:48:48.172880 | orchestrator | Saturday 18 April 2026 00:48:26 +0000 (0:00:01.227) 0:00:12.979 ******** 2026-04-18 00:48:48.172889 | orchestrator | 
skipping: [testbed-node-0] 2026-04-18 00:48:48.172893 | orchestrator | 2026-04-18 00:48:48.172949 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-18 00:48:48.172956 | orchestrator | Saturday 18 April 2026 00:48:27 +0000 (0:00:00.401) 0:00:13.381 ******** 2026-04-18 00:48:48.172965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.172974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.172980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.172985 | orchestrator | 2026-04-18 00:48:48.172989 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-18 00:48:48.172994 | orchestrator | Saturday 18 April 2026 00:48:28 +0000 (0:00:01.051) 0:00:14.432 ******** 2026-04-18 00:48:48.173005 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.173016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.173022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.173027 | orchestrator | 2026-04-18 00:48:48.173031 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-18 00:48:48.173036 | orchestrator | Saturday 18 April 2026 00:48:29 +0000 (0:00:01.359) 0:00:15.791 ******** 2026-04-18 00:48:48.173040 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-18 00:48:48.173091 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-18 00:48:48.173102 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-18 00:48:48.173106 | orchestrator | 2026-04-18 00:48:48.173111 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-18 00:48:48.173116 | orchestrator | Saturday 18 April 2026 00:48:31 +0000 (0:00:01.556) 0:00:17.348 ******** 2026-04-18 00:48:48.173120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-18 00:48:48.173129 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-18 00:48:48.173133 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-18 00:48:48.173138 | orchestrator | 2026-04-18 00:48:48.173142 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-18 00:48:48.173148 | orchestrator | Saturday 18 April 2026 00:48:34 +0000 (0:00:03.131) 0:00:20.479 ******** 2026-04-18 00:48:48.173153 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-18 00:48:48.173158 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-18 00:48:48.173163 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-18 00:48:48.173168 | orchestrator | 2026-04-18 00:48:48.173181 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-18 00:48:48.173187 | orchestrator | Saturday 18 April 2026 00:48:35 +0000 (0:00:01.416) 0:00:21.896 ******** 2026-04-18 00:48:48.173192 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-18 00:48:48.173197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-18 00:48:48.173203 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-18 00:48:48.173208 | orchestrator | 2026-04-18 00:48:48.173213 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-18 00:48:48.173218 | orchestrator | Saturday 18 April 2026 00:48:37 +0000 (0:00:01.760) 0:00:23.657 ******** 2026-04-18 00:48:48.173223 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-18 00:48:48.173228 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-18 00:48:48.173233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-18 00:48:48.173238 | orchestrator | 2026-04-18 00:48:48.173244 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-18 00:48:48.173249 | orchestrator | Saturday 18 April 2026 00:48:38 +0000 (0:00:01.193) 0:00:24.850 ******** 2026-04-18 00:48:48.173254 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-18 00:48:48.173259 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-18 00:48:48.173264 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-18 00:48:48.173269 | orchestrator | 2026-04-18 00:48:48.173275 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-18 00:48:48.173280 | orchestrator | Saturday 18 April 2026 00:48:39 +0000 (0:00:01.142) 0:00:25.993 ******** 2026-04-18 00:48:48.173285 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:48:48.173290 | orchestrator | 2026-04-18 00:48:48.173295 | orchestrator 
| TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-04-18 00:48:48.173300 | orchestrator | Saturday 18 April 2026 00:48:40 +0000 (0:00:00.773) 0:00:26.767 ******** 2026-04-18 00:48:48.173306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.173316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.173329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.173336 | orchestrator | 2026-04-18 00:48:48.173341 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-18 00:48:48.173346 | orchestrator | Saturday 18 April 2026 00:48:41 +0000 (0:00:01.353) 0:00:28.121 ******** 2026-04-18 00:48:48.173352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:48:48.173362 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:48.173368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:48:48.173373 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:48:48.173390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:48:48.173396 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:48:48.173402 | orchestrator | 2026-04-18 00:48:48.173407 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-18 00:48:48.173413 | orchestrator | Saturday 18 April 2026 00:48:42 +0000 (0:00:00.515) 0:00:28.637 ******** 2026-04-18 00:48:48.173418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:48:48.173423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:48:48.173432 | orchestrator | skipping: 
[testbed-node-0] 2026-04-18 00:48:48.173436 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:48:48.173441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:48:48.173446 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:48:48.173451 | orchestrator | 2026-04-18 00:48:48.173455 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-18 00:48:48.173460 | orchestrator | Saturday 18 April 2026 00:48:43 +0000 (0:00:00.755) 0:00:29.392 ******** 2026-04-18 00:48:48.173471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.173477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.173485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:48:48.173490 | orchestrator | 2026-04-18 00:48:48.173495 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-18 00:48:48.173499 | orchestrator | Saturday 18 April 2026 00:48:44 +0000 (0:00:00.897) 0:00:30.290 ******** 2026-04-18 00:48:48.173504 | orchestrator | changed: [testbed-node-0] => { 2026-04-18 00:48:48.173508 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:48.173513 | orchestrator | } 2026-04-18 00:48:48.173518 | orchestrator | changed: [testbed-node-1] => { 2026-04-18 00:48:48.173522 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:48.173527 | orchestrator | } 2026-04-18 00:48:48.173531 | orchestrator | changed: [testbed-node-2] => { 2026-04-18 00:48:48.173535 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:48:48.173540 | orchestrator | } 2026-04-18 00:48:48.173544 | orchestrator | 2026-04-18 00:48:48.173549 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-18 00:48:48.173553 | 
orchestrator | Saturday 18 April 2026 00:48:44 +0000 (0:00:00.459) 0:00:30.750 ******** 2026-04-18 00:48:48.173565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:48:48.173571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:48:48.173582 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:48:48.173590 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:48:48.173598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:48:48.173611 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:48:48.173618 | orchestrator | 2026-04-18 00:48:48.173625 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-18 00:48:48.173631 | orchestrator | Saturday 18 April 2026 00:48:45 +0000 (0:00:00.769) 0:00:31.519 ******** 2026-04-18 00:48:48.173639 | orchestrator | changed: 
[testbed-node-0] 2026-04-18 00:48:48.173646 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:48:48.173653 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:48:48.173660 | orchestrator | 2026-04-18 00:48:48.173667 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-18 00:48:48.173674 | orchestrator | Saturday 18 April 2026 00:48:46 +0000 (0:00:00.769) 0:00:32.288 ******** 2026-04-18 00:48:48.173692 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_f9mktpwa/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_f9mktpwa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_f9mktpwa/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq: Internal Server Error (\"unknown: repository kolla/release/2024.2/rabbitmq not found\")\\n'"} 2026-04-18 00:48:48.173706 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_o0j1hnwz/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_o0j1hnwz/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_o0j1hnwz/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq: Internal Server Error (\"unknown: repository kolla/release/2024.2/rabbitmq not found\")\\n'"} 2026-04-18 00:48:48.173725 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_vy83hvub/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_vy83hvub/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_vy83hvub/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=4.1.8.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Frabbitmq: Internal Server Error (\"unknown: repository kolla/release/2024.2/rabbitmq not found\")\\n'"}
2026-04-18 00:48:48.173739 | orchestrator |
2026-04-18 00:48:48.173746 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:48:48.173754 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-18 00:48:48.173762 | orchestrator | testbed-node-0 : ok=19  changed=12  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0
2026-04-18 00:48:48.173770 | orchestrator | testbed-node-1 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0
2026-04-18 00:48:48.173777 | orchestrator | testbed-node-2 : ok=17  changed=12  unreachable=0 failed=1  skipped=3  rescued=0 ignored=0
2026-04-18 00:48:48.173784 | orchestrator |
2026-04-18 00:48:48.173790 | orchestrator |
2026-04-18 00:48:48.173797 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:48:48.173804 | orchestrator | Saturday 18 April 2026 00:48:47 +0000 (0:00:01.285) 0:00:33.574 ********
2026-04-18 00:48:48.173812 | orchestrator | ===============================================================================
2026-04-18 00:48:48.173819 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.47s
2026-04-18 00:48:48.173826 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.13s
2026-04-18 00:48:48.173832 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.24s
2026-04-18 00:48:48.173839 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.76s
2026-04-18 00:48:48.173846 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.56s
2026-04-18 00:48:48.173853 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.52s
2026-04-18 00:48:48.173860 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.42s
2026-04-18 00:48:48.173867 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.36s
2026-04-18 00:48:48.173875 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.35s
2026-04-18 00:48:48.173882 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 1.29s
2026-04-18 00:48:48.173889 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.23s
2026-04-18 00:48:48.173922 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.19s
2026-04-18 00:48:48.173936 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.14s
2026-04-18 00:48:48.173943 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.05s
2026-04-18 00:48:48.173949 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 0.90s
2026-04-18 00:48:48.173954 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.77s
2026-04-18 00:48:48.173959 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.77s
2026-04-18 00:48:48.173963 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.77s
2026-04-18 00:48:48.173967 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.77s
2026-04-18 00:48:48.173972 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 0.76s
2026-04-18 00:48:48.173976 | orchestrator | 2026-04-18 00:48:48 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state
STARTED 2026-04-18 00:48:48.173982 | orchestrator | 2026-04-18 00:48:48 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:48:51.207818 | orchestrator | 2026-04-18 00:48:51 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:48:51.209221 | orchestrator | 2026-04-18 00:48:51 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED
2026-04-18 00:48:51.211432 | orchestrator | 2026-04-18 00:48:51 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state STARTED
2026-04-18 00:48:51.211482 | orchestrator | 2026-04-18 00:48:51 | INFO  | Wait 1 second(s) until the next check
[identical polling records for these three tasks repeat every ~3 seconds from 00:48:54 through 00:50:43]
2026-04-18 00:50:46.761028 | orchestrator |
2026-04-18 00:50:46 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:50:46.761084 | orchestrator | 2026-04-18 00:50:46 | INFO  | Task c8b4e3dd-4e6a-4067-a4ae-25297700df2c is in state STARTED 2026-04-18 00:50:46.761331 | orchestrator | 2026-04-18 00:50:46 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED 2026-04-18 00:50:46.762049 | orchestrator | 2026-04-18 00:50:46 | INFO  | Task 466415e4-b7c5-4e8e-8dca-3893ce0557cc is in state STARTED 2026-04-18 00:50:46.764021 | orchestrator | 2026-04-18 00:50:46 | INFO  | Task 46160e16-4ca7-4ae1-a9f3-ae1763ffbc7f is in state SUCCESS 2026-04-18 00:50:46.765095 | orchestrator | 2026-04-18 00:50:46.765120 | orchestrator | 2026-04-18 00:50:46.765125 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-18 00:50:46.765129 | orchestrator | 2026-04-18 00:50:46.765133 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-18 00:50:46.765139 | orchestrator | Saturday 18 April 2026 00:46:45 +0000 (0:00:00.366) 0:00:00.366 ******** 2026-04-18 00:50:46.765145 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:50:46.765165 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:50:46.765178 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:50:46.765184 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:50:46.765190 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:50:46.765196 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:50:46.765201 | orchestrator | 2026-04-18 00:50:46.765207 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-18 00:50:46.765230 | orchestrator | Saturday 18 April 2026 00:46:46 +0000 (0:00:00.558) 0:00:00.925 ******** 2026-04-18 00:50:46.765238 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765245 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765251 | orchestrator 
| skipping: [testbed-node-5] 2026-04-18 00:50:46.765258 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.765263 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.765267 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.765271 | orchestrator | 2026-04-18 00:50:46.765277 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-18 00:50:46.765286 | orchestrator | Saturday 18 April 2026 00:46:47 +0000 (0:00:00.847) 0:00:01.773 ******** 2026-04-18 00:50:46.765293 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765299 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765305 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:50:46.765311 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.765332 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.765341 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.765347 | orchestrator | 2026-04-18 00:50:46.765354 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-18 00:50:46.765360 | orchestrator | Saturday 18 April 2026 00:46:47 +0000 (0:00:00.480) 0:00:02.254 ******** 2026-04-18 00:50:46.765365 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:50:46.765369 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:50:46.765373 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:50:46.765376 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.765380 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:50:46.765384 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:50:46.765387 | orchestrator | 2026-04-18 00:50:46.765399 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-18 00:50:46.765403 | orchestrator | Saturday 18 April 2026 00:46:49 +0000 (0:00:02.372) 0:00:04.626 ******** 2026-04-18 00:50:46.765407 | orchestrator | changed: 
[testbed-node-3] 2026-04-18 00:50:46.765411 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:50:46.765414 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:50:46.765418 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:50:46.765422 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.765425 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:50:46.765429 | orchestrator | 2026-04-18 00:50:46.765433 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-04-18 00:50:46.765437 | orchestrator | Saturday 18 April 2026 00:46:52 +0000 (0:00:02.156) 0:00:06.782 ******** 2026-04-18 00:50:46.765441 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:50:46.765445 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.765448 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:50:46.765452 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:50:46.765455 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:50:46.765459 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:50:46.765463 | orchestrator | 2026-04-18 00:50:46.765466 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-18 00:50:46.765470 | orchestrator | Saturday 18 April 2026 00:46:53 +0000 (0:00:01.925) 0:00:08.708 ******** 2026-04-18 00:50:46.765474 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765477 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765481 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:50:46.765485 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.765488 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.765492 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.765496 | orchestrator | 2026-04-18 00:50:46.765499 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-18 00:50:46.765503 | orchestrator | 
Saturday 18 April 2026 00:46:55 +0000 (0:00:01.481) 0:00:10.189 ******** 2026-04-18 00:50:46.765507 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765515 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765519 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:50:46.765522 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.765526 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.765530 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.765533 | orchestrator | 2026-04-18 00:50:46.765537 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-18 00:50:46.765541 | orchestrator | Saturday 18 April 2026 00:46:56 +0000 (0:00:00.562) 0:00:10.751 ******** 2026-04-18 00:50:46.765545 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-18 00:50:46.765548 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-18 00:50:46.765552 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765556 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-18 00:50:46.765559 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-18 00:50:46.765563 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765567 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-18 00:50:46.765571 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-18 00:50:46.765574 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:50:46.765578 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-18 00:50:46.765589 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-18 00:50:46.765593 | orchestrator | skipping: 
[testbed-node-0] 2026-04-18 00:50:46.765596 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-18 00:50:46.765600 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-18 00:50:46.765604 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-18 00:50:46.765607 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-18 00:50:46.765611 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.765617 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.765623 | orchestrator | 2026-04-18 00:50:46.765628 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-18 00:50:46.765633 | orchestrator | Saturday 18 April 2026 00:46:56 +0000 (0:00:00.880) 0:00:11.632 ******** 2026-04-18 00:50:46.765638 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765649 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765656 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:50:46.765663 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.765669 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.765675 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.765680 | orchestrator | 2026-04-18 00:50:46.765687 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-18 00:50:46.765694 | orchestrator | Saturday 18 April 2026 00:46:58 +0000 (0:00:01.713) 0:00:13.345 ******** 2026-04-18 00:50:46.765700 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:50:46.765707 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:50:46.765714 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:50:46.765721 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:50:46.765727 | orchestrator | ok: [testbed-node-1] 2026-04-18 
00:50:46.765733 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:50:46.765739 | orchestrator | 2026-04-18 00:50:46.765743 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-18 00:50:46.765748 | orchestrator | Saturday 18 April 2026 00:46:59 +0000 (0:00:00.896) 0:00:14.241 ******** 2026-04-18 00:50:46.765752 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:50:46.765756 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:50:46.765761 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.765769 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:50:46.765774 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:50:46.765778 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:50:46.765783 | orchestrator | 2026-04-18 00:50:46.765790 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-18 00:50:46.765795 | orchestrator | Saturday 18 April 2026 00:47:06 +0000 (0:00:07.322) 0:00:21.564 ******** 2026-04-18 00:50:46.765799 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765803 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765807 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:50:46.765812 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.765816 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.765820 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.765825 | orchestrator | 2026-04-18 00:50:46.765829 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-18 00:50:46.765833 | orchestrator | Saturday 18 April 2026 00:47:08 +0000 (0:00:01.177) 0:00:22.741 ******** 2026-04-18 00:50:46.765838 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765842 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:50:46.765846 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765897 | 
orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.765902 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.765906 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.765910 | orchestrator | 2026-04-18 00:50:46.765915 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-18 00:50:46.765920 | orchestrator | Saturday 18 April 2026 00:47:09 +0000 (0:00:01.325) 0:00:24.066 ******** 2026-04-18 00:50:46.765924 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765928 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765933 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:50:46.765937 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.765941 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.765945 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.765950 | orchestrator | 2026-04-18 00:50:46.765954 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-18 00:50:46.765958 | orchestrator | Saturday 18 April 2026 00:47:10 +0000 (0:00:00.925) 0:00:24.992 ******** 2026-04-18 00:50:46.765963 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-18 00:50:46.765967 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-18 00:50:46.765971 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.765975 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-18 00:50:46.765980 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-18 00:50:46.765984 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.765988 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-18 00:50:46.765992 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-18 00:50:46.765997 | orchestrator | skipping: [testbed-node-5] 
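The three download tasks above (x64 run, arm64 and armhf skipped) imply an architecture switch in the k3s_download role. A minimal sketch of that selection, assuming the public k3s release artifact naming (`k3s`, `k3s-arm64`, `k3s-armhf`); the function name and example version are ours, not from the role:

```shell
# Map the machine architecture to the k3s release artifact name.
# On these amd64 testbed nodes only the x64 branch runs, matching the log.
k3s_artifact() {
  case "$1" in
    x86_64)  echo "k3s" ;;        # x64 build carries no suffix
    aarch64) echo "k3s-arm64" ;;
    armv7l)  echo "k3s-armhf" ;;
    *)       echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

# Illustrative download (version is a placeholder, not taken from this job):
# curl -fL "https://github.com/k3s-io/k3s/releases/download/v1.30.0%2Bk3s1/$(k3s_artifact "$(uname -m)")" \
#   -o /usr/local/bin/k3s && chmod +x /usr/local/bin/k3s
```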
2026-04-18 00:50:46.766001 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-18 00:50:46.766005 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-18 00:50:46.766010 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.766052 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-18 00:50:46.766060 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-18 00:50:46.766066 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.766073 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-04-18 00:50:46.766080 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-04-18 00:50:46.766087 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.766094 | orchestrator | 2026-04-18 00:50:46.766100 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-04-18 00:50:46.766113 | orchestrator | Saturday 18 April 2026 00:47:10 +0000 (0:00:00.631) 0:00:25.623 ******** 2026-04-18 00:50:46.766122 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.766125 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.766129 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:50:46.766133 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.766136 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.766140 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.766144 | orchestrator | 2026-04-18 00:50:46.766147 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-04-18 00:50:46.766151 | orchestrator | Saturday 18 April 2026 00:47:11 +0000 (0:00:00.674) 0:00:26.297 ******** 2026-04-18 00:50:46.766155 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:50:46.766159 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:50:46.766162 | orchestrator | skipping: 
[testbed-node-5] 2026-04-18 00:50:46.766166 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.766170 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.766173 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.766177 | orchestrator | 2026-04-18 00:50:46.766181 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-04-18 00:50:46.766184 | orchestrator | 2026-04-18 00:50:46.766188 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-04-18 00:50:46.766192 | orchestrator | Saturday 18 April 2026 00:47:12 +0000 (0:00:01.101) 0:00:27.399 ******** 2026-04-18 00:50:46.766196 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:50:46.766200 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:50:46.766203 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:50:46.766207 | orchestrator | 2026-04-18 00:50:46.766211 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-04-18 00:50:46.766214 | orchestrator | Saturday 18 April 2026 00:47:13 +0000 (0:00:00.780) 0:00:28.180 ******** 2026-04-18 00:50:46.766218 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:50:46.766222 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:50:46.766225 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:50:46.766229 | orchestrator | 2026-04-18 00:50:46.766233 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-04-18 00:50:46.766236 | orchestrator | Saturday 18 April 2026 00:47:14 +0000 (0:00:01.309) 0:00:29.489 ******** 2026-04-18 00:50:46.766240 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:50:46.766244 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:50:46.766247 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:50:46.766251 | orchestrator | 2026-04-18 00:50:46.766255 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] 
**************************** 2026-04-18 00:50:46.766259 | orchestrator | Saturday 18 April 2026 00:47:15 +0000 (0:00:00.897) 0:00:30.387 ******** 2026-04-18 00:50:46.766263 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:50:46.766266 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:50:46.766270 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:50:46.766274 | orchestrator | 2026-04-18 00:50:46.766278 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-18 00:50:46.766281 | orchestrator | Saturday 18 April 2026 00:47:16 +0000 (0:00:00.806) 0:00:31.194 ******** 2026-04-18 00:50:46.766285 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.766289 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.766293 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.766297 | orchestrator | 2026-04-18 00:50:46.766300 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-18 00:50:46.766304 | orchestrator | Saturday 18 April 2026 00:47:16 +0000 (0:00:00.416) 0:00:31.610 ******** 2026-04-18 00:50:46.766308 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.766312 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:50:46.766315 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:50:46.766319 | orchestrator | 2026-04-18 00:50:46.766323 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-18 00:50:46.766326 | orchestrator | Saturday 18 April 2026 00:47:17 +0000 (0:00:00.945) 0:00:32.556 ******** 2026-04-18 00:50:46.766333 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.766337 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:50:46.766340 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:50:46.766344 | orchestrator | 2026-04-18 00:50:46.766348 | orchestrator | TASK [k3s_server : Deploy vip manifest] 
**************************************** 2026-04-18 00:50:46.766351 | orchestrator | Saturday 18 April 2026 00:47:19 +0000 (0:00:01.869) 0:00:34.425 ******** 2026-04-18 00:50:46.766355 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:50:46.766359 | orchestrator | 2026-04-18 00:50:46.766362 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-18 00:50:46.766366 | orchestrator | Saturday 18 April 2026 00:47:20 +0000 (0:00:01.146) 0:00:35.572 ******** 2026-04-18 00:50:46.766370 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:50:46.766373 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:50:46.766377 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:50:46.766381 | orchestrator | 2026-04-18 00:50:46.766385 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-18 00:50:46.766388 | orchestrator | Saturday 18 April 2026 00:47:23 +0000 (0:00:02.636) 0:00:38.209 ******** 2026-04-18 00:50:46.766392 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.766396 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.766399 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.766403 | orchestrator | 2026-04-18 00:50:46.766407 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-18 00:50:46.766410 | orchestrator | Saturday 18 April 2026 00:47:24 +0000 (0:00:00.695) 0:00:38.904 ******** 2026-04-18 00:50:46.766414 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.766418 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.766421 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.766425 | orchestrator | 2026-04-18 00:50:46.766429 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-18 00:50:46.766432 | orchestrator | 
Saturday 18 April 2026 00:47:25 +0000 (0:00:01.194) 0:00:40.099 ******** 2026-04-18 00:50:46.766436 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.766440 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.766443 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.766447 | orchestrator | 2026-04-18 00:50:46.766451 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-18 00:50:46.766457 | orchestrator | Saturday 18 April 2026 00:47:26 +0000 (0:00:01.389) 0:00:41.488 ******** 2026-04-18 00:50:46.766461 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.766465 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.766468 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.766472 | orchestrator | 2026-04-18 00:50:46.766476 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-18 00:50:46.766479 | orchestrator | Saturday 18 April 2026 00:47:27 +0000 (0:00:00.449) 0:00:41.938 ******** 2026-04-18 00:50:46.766483 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:50:46.766486 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:50:46.766490 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:50:46.766494 | orchestrator | 2026-04-18 00:50:46.766514 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-18 00:50:46.766518 | orchestrator | Saturday 18 April 2026 00:47:27 +0000 (0:00:00.343) 0:00:42.281 ******** 2026-04-18 00:50:46.766522 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:50:46.766526 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:50:46.766529 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:50:46.766533 | orchestrator | 2026-04-18 00:50:46.766537 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-18 00:50:46.766540 | orchestrator | 
Saturday 18 April 2026 00:47:29 +0000 (0:00:02.048) 0:00:44.330 ******** 2026-04-18 00:50:46.766544 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:50:46.766551 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:50:46.766554 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:50:46.766558 | orchestrator | 2026-04-18 00:50:46.766562 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-18 00:50:46.766565 | orchestrator | Saturday 18 April 2026 00:47:32 +0000 (0:00:02.551) 0:00:46.881 ******** 2026-04-18 00:50:46.766569 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:50:46.766573 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:50:46.766577 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:50:46.766580 | orchestrator | 2026-04-18 00:50:46.766584 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-18 00:50:46.766588 | orchestrator | Saturday 18 April 2026 00:47:32 +0000 (0:00:00.627) 0:00:47.509 ******** 2026-04-18 00:50:46.766592 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-18 00:50:46.766598 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-18 00:50:46.766602 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-18 00:50:46.766605 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-18 00:50:46.766609 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-04-18 00:50:46.766614 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-18 00:50:46.766620 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-18 00:50:46.766629 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-18 00:50:46.766636 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-18 00:50:46.766642 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-18 00:50:46.766648 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-18 00:50:46.766654 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
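The "Verify that all nodes actually joined" task above is a bounded retry: it re-runs a membership check (ultimately against `kubectl get nodes`) up to 20 times before giving up, which is why the log shows decreasing "retries left" counters. A minimal sketch of that retry pattern, assuming nothing about the role beyond what the log shows; `wait_for` is our name, not the task's:

```shell
# Poll a check command until it succeeds or the retry budget is spent,
# mirroring Ansible's until/retries/delay loop seen in the log.
wait_for() {   # usage: wait_for <retries> <delay_seconds> <cmd...>
  retries=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$retries" ]; do
    if "$@"; then return 0; fi
    echo "FAILED - RETRYING ($((retries - i)) retries left)." >&2
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# A real check for this six-node testbed might look like (illustrative):
# wait_for 20 10 sh -c '[ "$(kubectl get nodes --no-headers | wc -l)" -eq 6 ]'
```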
2026-04-18 00:50:46.766660 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.766665 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.766671 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.766677 | orchestrator |
2026-04-18 00:50:46.766683 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-18 00:50:46.766690 | orchestrator | Saturday 18 April 2026 00:48:16 +0000 (0:00:43.612) 0:01:31.121 ********
2026-04-18 00:50:46.766696 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.766702 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:50:46.766708 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:50:46.766715 | orchestrator |
2026-04-18 00:50:46.766718 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-18 00:50:46.766722 | orchestrator | Saturday 18 April 2026 00:48:16 +0000 (0:00:00.424) 0:01:31.546 ********
2026-04-18 00:50:46.766726 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:50:46.766730 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:50:46.766733 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:50:46.766737 | orchestrator |
2026-04-18 00:50:46.766740 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-18 00:50:46.766749 | orchestrator | Saturday 18 April 2026 00:48:17 +0000 (0:00:01.147) 0:01:32.694 ********
2026-04-18 00:50:46.766753 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:50:46.766756 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:50:46.766760 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:50:46.766764 | orchestrator |
2026-04-18 00:50:46.766771 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-18 00:50:46.766775 | orchestrator | Saturday 18 April 2026 00:48:19 +0000 (0:00:01.269) 0:01:33.964 ********
2026-04-18 00:50:46.766778 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:50:46.766782 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:50:46.766786 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:50:46.766789 | orchestrator |
2026-04-18 00:50:46.766793 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-18 00:50:46.766797 | orchestrator | Saturday 18 April 2026 00:48:44 +0000 (0:00:24.796) 0:01:58.760 ********
2026-04-18 00:50:46.766801 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.766804 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.766808 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.766812 | orchestrator |
2026-04-18 00:50:46.766815 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-18 00:50:46.766819 | orchestrator | Saturday 18 April 2026 00:48:44 +0000 (0:00:00.734) 0:01:59.495 ********
2026-04-18 00:50:46.766823 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.766826 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.766830 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.766834 | orchestrator |
2026-04-18 00:50:46.766837 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-18 00:50:46.766841 | orchestrator | Saturday 18 April 2026 00:48:45 +0000 (0:00:00.831) 0:02:00.326 ********
2026-04-18 00:50:46.766845 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:50:46.766879 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:50:46.766884 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:50:46.766888 | orchestrator |
2026-04-18 00:50:46.766892 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-18 00:50:46.766896 | orchestrator | Saturday 18 April 2026 00:48:46 +0000 (0:00:00.595) 0:02:00.922 ********
2026-04-18 00:50:46.766900 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.766904 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.766907 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.766911 | orchestrator |
2026-04-18 00:50:46.766915 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-18 00:50:46.766918 | orchestrator | Saturday 18 April 2026 00:48:46 +0000 (0:00:00.690) 0:02:01.612 ********
2026-04-18 00:50:46.766922 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.766926 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.766929 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.766933 | orchestrator |
2026-04-18 00:50:46.766937 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-18 00:50:46.766944 | orchestrator | Saturday 18 April 2026 00:48:47 +0000 (0:00:00.322) 0:02:01.934 ********
2026-04-18 00:50:46.766947 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:50:46.766951 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:50:46.766955 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:50:46.766959 | orchestrator |
2026-04-18 00:50:46.766962 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-18 00:50:46.766966 | orchestrator | Saturday 18 April 2026 00:48:47 +0000 (0:00:00.664) 0:02:02.598 ********
2026-04-18 00:50:46.766970 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:50:46.766973 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:50:46.766977 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:50:46.766981 | orchestrator |
2026-04-18 00:50:46.766987 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-18 00:50:46.766994 | orchestrator | Saturday 18 April 2026 00:48:48 +0000 (0:00:00.950) 0:02:03.549 ********
2026-04-18 00:50:46.767004 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:50:46.767010 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:50:46.767016 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:50:46.767021 | orchestrator |
2026-04-18 00:50:46.767027 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-18 00:50:46.767033 | orchestrator | Saturday 18 April 2026 00:48:49 +0000 (0:00:00.896) 0:02:04.446 ********
2026-04-18 00:50:46.767039 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:50:46.767045 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:50:46.767051 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:50:46.767057 | orchestrator |
2026-04-18 00:50:46.767063 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-18 00:50:46.767068 | orchestrator | Saturday 18 April 2026 00:48:50 +0000 (0:00:00.946) 0:02:05.393 ********
2026-04-18 00:50:46.767074 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.767080 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:50:46.767086 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:50:46.767093 | orchestrator |
2026-04-18 00:50:46.767099 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-18 00:50:46.767106 | orchestrator | Saturday 18 April 2026 00:48:50 +0000 (0:00:00.267) 0:02:05.661 ********
2026-04-18 00:50:46.767112 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.767118 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:50:46.767124 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:50:46.767130 | orchestrator |
2026-04-18 00:50:46.767134 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-18 00:50:46.767138 | orchestrator | Saturday 18 April 2026 00:48:51 +0000 (0:00:00.457) 0:02:06.118 ********
2026-04-18 00:50:46.767144 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.767150 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.767157 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.767162 | orchestrator |
2026-04-18 00:50:46.767169 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-18 00:50:46.767175 | orchestrator | Saturday 18 April 2026 00:48:52 +0000 (0:00:00.689) 0:02:06.807 ********
2026-04-18 00:50:46.767182 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.767188 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.767195 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.767201 | orchestrator |
2026-04-18 00:50:46.767207 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-18 00:50:46.767215 | orchestrator | Saturday 18 April 2026 00:48:52 +0000 (0:00:00.712) 0:02:07.520 ********
2026-04-18 00:50:46.767219 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-18 00:50:46.767227 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-18 00:50:46.767231 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-18 00:50:46.767236 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-18 00:50:46.767243 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-18 00:50:46.767249 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-18 00:50:46.767255 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-18 00:50:46.767262 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-18 00:50:46.767268 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-18 00:50:46.767275 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-18 00:50:46.767281 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-18 00:50:46.767300 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-18 00:50:46.767307 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-18 00:50:46.767313 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-18 00:50:46.767318 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-18 00:50:46.767322 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-18 00:50:46.767326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-18 00:50:46.767332 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-18 00:50:46.767343 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-18 00:50:46.767349 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-18 00:50:46.767356 | orchestrator |
2026-04-18 00:50:46.767363 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-18 00:50:46.767369 | orchestrator |
2026-04-18 00:50:46.767375 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-18 00:50:46.767381 | orchestrator | Saturday 18 April 2026 00:48:56 +0000 (0:00:03.673) 0:02:11.194 ********
2026-04-18 00:50:46.767388 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:50:46.767394 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:50:46.767400 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:50:46.767406 | orchestrator |
2026-04-18 00:50:46.767412 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-18 00:50:46.767419 | orchestrator | Saturday 18 April 2026 00:48:56 +0000 (0:00:00.327) 0:02:11.521 ********
2026-04-18 00:50:46.767425 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:50:46.767431 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:50:46.767438 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:50:46.767443 | orchestrator |
2026-04-18 00:50:46.767450 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-18 00:50:46.767456 | orchestrator | Saturday 18 April 2026 00:48:57 +0000 (0:00:00.721) 0:02:12.242 ********
2026-04-18 00:50:46.767463 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:50:46.767470 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:50:46.767476 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:50:46.767482 | orchestrator |
2026-04-18 00:50:46.767488 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-18 00:50:46.767494 | orchestrator | Saturday 18 April 2026 00:48:57 +0000 (0:00:00.306) 0:02:12.548 ********
2026-04-18 00:50:46.767501 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:50:46.767507 | orchestrator |
2026-04-18 00:50:46.767513 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-18 00:50:46.767520 | orchestrator | Saturday 18 April 2026 00:48:58 +0000 (0:00:00.634) 0:02:13.183 ********
2026-04-18 00:50:46.767526 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:50:46.767532 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:50:46.767538 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:50:46.767544 | orchestrator |
2026-04-18 00:50:46.767551 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-18 00:50:46.767557 | orchestrator | Saturday 18 April 2026 00:48:58 +0000 (0:00:00.279) 0:02:13.463 ********
2026-04-18 00:50:46.767563 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:50:46.767569 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:50:46.767576 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:50:46.767582 | orchestrator |
2026-04-18 00:50:46.767588 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-18 00:50:46.767599 | orchestrator | Saturday 18 April 2026 00:48:59 +0000 (0:00:00.325) 0:02:13.789 ********
2026-04-18 00:50:46.767606 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:50:46.767612 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:50:46.767618 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:50:46.767624 | orchestrator |
2026-04-18 00:50:46.767631 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-18 00:50:46.767637 | orchestrator | Saturday 18 April 2026 00:48:59 +0000 (0:00:00.481) 0:02:14.270 ********
2026-04-18 00:50:46.767643 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:50:46.767649 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:50:46.767655 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:50:46.767662 | orchestrator |
2026-04-18 00:50:46.767672 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-18 00:50:46.767678 | orchestrator | Saturday 18 April 2026 00:49:00 +0000 (0:00:00.722) 0:02:14.992 ********
2026-04-18 00:50:46.767685 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:50:46.767691 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:50:46.767697 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:50:46.767703 | orchestrator |
2026-04-18 00:50:46.767709 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-18 00:50:46.767716 | orchestrator | Saturday 18 April 2026 00:49:01 +0000 (0:00:01.106) 0:02:16.099 ********
2026-04-18 00:50:46.767722 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:50:46.767729 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:50:46.767735 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:50:46.767741 | orchestrator |
2026-04-18 00:50:46.767747 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-18 00:50:46.767753 | orchestrator | Saturday 18 April 2026 00:49:02 +0000 (0:00:01.269) 0:02:17.369 ********
2026-04-18 00:50:46.767760 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:50:46.767766 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:50:46.767772 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:50:46.767778 | orchestrator |
2026-04-18 00:50:46.767784 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-18 00:50:46.767790 | orchestrator |
2026-04-18 00:50:46.767796 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-18 00:50:46.767803 | orchestrator | Saturday 18 April 2026 00:49:12 +0000 (0:00:10.121) 0:02:27.491 ********
2026-04-18 00:50:46.767809 | orchestrator | ok: [testbed-manager]
2026-04-18 00:50:46.767815 | orchestrator |
2026-04-18 00:50:46.767821 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-18 00:50:46.767828 | orchestrator | Saturday 18 April 2026 00:49:13 +0000 (0:00:00.742) 0:02:28.233 ********
2026-04-18 00:50:46.767834 | orchestrator | changed: [testbed-manager]
2026-04-18 00:50:46.767840 | orchestrator |
2026-04-18 00:50:46.767846 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-18 00:50:46.767866 | orchestrator | Saturday 18 April 2026 00:49:13 +0000 (0:00:00.390) 0:02:28.623 ********
2026-04-18 00:50:46.767873 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-18 00:50:46.767879 | orchestrator |
2026-04-18 00:50:46.767886 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-18 00:50:46.767895 | orchestrator | Saturday 18 April 2026 00:49:14 +0000 (0:00:00.568) 0:02:29.192 ********
2026-04-18 00:50:46.767902 | orchestrator | changed: [testbed-manager]
2026-04-18 00:50:46.767908 | orchestrator |
2026-04-18 00:50:46.767912 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-18 00:50:46.767916 | orchestrator | Saturday 18 April 2026 00:49:15 +0000 (0:00:00.674) 0:02:29.866 ********
2026-04-18 00:50:46.767919 | orchestrator | changed: [testbed-manager]
2026-04-18 00:50:46.767923 | orchestrator |
2026-04-18 00:50:46.767927 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-18 00:50:46.767931 | orchestrator | Saturday 18 April 2026 00:49:15 +0000 (0:00:00.501) 0:02:30.368 ********
2026-04-18 00:50:46.767938 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-18 00:50:46.767942 | orchestrator |
2026-04-18 00:50:46.767946 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-18 00:50:46.767949 | orchestrator | Saturday 18 April 2026 00:49:17 +0000 (0:00:01.501) 0:02:31.869 ********
2026-04-18 00:50:46.767953 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-18 00:50:46.767957 | orchestrator |
2026-04-18 00:50:46.767961 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-18 00:50:46.767964 | orchestrator | Saturday 18 April 2026 00:49:18 +0000 (0:00:00.868) 0:02:32.738 ********
2026-04-18 00:50:46.767968 | orchestrator | changed: [testbed-manager]
2026-04-18 00:50:46.767972 | orchestrator |
2026-04-18 00:50:46.767976 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-18 00:50:46.767980 | orchestrator | Saturday 18 April 2026 00:49:18 +0000 (0:00:00.378) 0:02:33.117 ********
2026-04-18 00:50:46.767983 | orchestrator | changed: [testbed-manager]
2026-04-18 00:50:46.767987 | orchestrator |
2026-04-18 00:50:46.767991 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-18 00:50:46.767994 | orchestrator |
2026-04-18 00:50:46.767998 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-18 00:50:46.768002 | orchestrator | Saturday 18 April 2026 00:49:18 +0000 (0:00:00.373) 0:02:33.490 ********
2026-04-18 00:50:46.768006 | orchestrator | ok: [testbed-manager]
2026-04-18 00:50:46.768009 | orchestrator |
2026-04-18 00:50:46.768013 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-18 00:50:46.768017 | orchestrator | Saturday 18 April 2026 00:49:18 +0000 (0:00:00.225) 0:02:33.715 ********
2026-04-18 00:50:46.768021 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-18 00:50:46.768024 | orchestrator |
2026-04-18 00:50:46.768028 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-18 00:50:46.768032 | orchestrator | Saturday 18 April 2026 00:49:19 +0000 (0:00:00.246) 0:02:33.962 ********
2026-04-18 00:50:46.768035 | orchestrator | ok: [testbed-manager]
2026-04-18 00:50:46.768039 | orchestrator |
2026-04-18 00:50:46.768043 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-18 00:50:46.768047 | orchestrator | Saturday 18 April 2026 00:49:20 +0000 (0:00:00.787) 0:02:34.749 ********
2026-04-18 00:50:46.768050 | orchestrator | ok: [testbed-manager]
2026-04-18 00:50:46.768054 | orchestrator |
2026-04-18 00:50:46.768058 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-18 00:50:46.768062 | orchestrator | Saturday 18 April 2026 00:49:21 +0000 (0:00:01.292) 0:02:36.042 ********
2026-04-18 00:50:46.768065 | orchestrator | changed: [testbed-manager]
2026-04-18 00:50:46.768069 | orchestrator |
2026-04-18 00:50:46.768073 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-18 00:50:46.768077 | orchestrator | Saturday 18 April 2026 00:49:22 +0000 (0:00:00.810) 0:02:36.852 ********
2026-04-18 00:50:46.768080 | orchestrator | ok: [testbed-manager]
2026-04-18 00:50:46.768084 | orchestrator |
2026-04-18 00:50:46.768091 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-18 00:50:46.768095 | orchestrator | Saturday 18 April 2026 00:49:22 +0000 (0:00:00.367) 0:02:37.219 ********
2026-04-18 00:50:46.768099 | orchestrator | changed: [testbed-manager]
2026-04-18 00:50:46.768102 | orchestrator |
2026-04-18 00:50:46.768106 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-18 00:50:46.768110 | orchestrator | Saturday 18 April 2026 00:49:28 +0000 (0:00:05.512) 0:02:42.732 ********
2026-04-18 00:50:46.768114 | orchestrator | changed: [testbed-manager]
2026-04-18 00:50:46.768117 | orchestrator |
2026-04-18 00:50:46.768121 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-18 00:50:46.768125 | orchestrator | Saturday 18 April 2026 00:49:38 +0000 (0:00:10.735) 0:02:53.468 ********
2026-04-18 00:50:46.768128 | orchestrator | ok: [testbed-manager]
2026-04-18 00:50:46.768136 | orchestrator |
2026-04-18 00:50:46.768140 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-18 00:50:46.768143 | orchestrator |
2026-04-18 00:50:46.768147 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-18 00:50:46.768151 | orchestrator | Saturday 18 April 2026 00:49:39 +0000 (0:00:00.560) 0:02:54.028 ********
2026-04-18 00:50:46.768154 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.768158 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.768162 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.768165 | orchestrator |
2026-04-18 00:50:46.768169 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-18 00:50:46.768173 | orchestrator | Saturday 18 April 2026 00:49:39 +0000 (0:00:00.327) 0:02:54.355 ********
2026-04-18 00:50:46.768177 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768180 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:50:46.768184 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:50:46.768188 | orchestrator |
2026-04-18 00:50:46.768191 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-18 00:50:46.768195 | orchestrator | Saturday 18 April 2026 00:49:40 +0000 (0:00:00.480) 0:02:54.836 ********
2026-04-18 00:50:46.768199 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:50:46.768203 | orchestrator |
2026-04-18 00:50:46.768206 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-18 00:50:46.768210 | orchestrator | Saturday 18 April 2026 00:49:40 +0000 (0:00:00.505) 0:02:55.341 ********
2026-04-18 00:50:46.768224 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-18 00:50:46.768229 | orchestrator |
2026-04-18 00:50:46.768232 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-18 00:50:46.768236 | orchestrator | Saturday 18 April 2026 00:49:41 +0000 (0:00:00.896) 0:02:56.238 ********
2026-04-18 00:50:46.768240 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-18 00:50:46.768244 | orchestrator |
2026-04-18 00:50:46.768247 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-18 00:50:46.768251 | orchestrator | Saturday 18 April 2026 00:49:42 +0000 (0:00:00.900) 0:02:57.138 ********
2026-04-18 00:50:46.768255 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768259 | orchestrator |
2026-04-18 00:50:46.768262 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-18 00:50:46.768266 | orchestrator | Saturday 18 April 2026 00:49:42 +0000 (0:00:00.102) 0:02:57.241 ********
2026-04-18 00:50:46.768270 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-18 00:50:46.768273 | orchestrator |
2026-04-18 00:50:46.768277 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-18 00:50:46.768281 | orchestrator | Saturday 18 April 2026 00:49:43 +0000 (0:00:00.986) 0:02:58.227 ********
2026-04-18 00:50:46.768285 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768288 | orchestrator |
2026-04-18 00:50:46.768292 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-18 00:50:46.768296 | orchestrator | Saturday 18 April 2026 00:49:43 +0000 (0:00:00.114) 0:02:58.341 ********
2026-04-18 00:50:46.768300 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768303 | orchestrator |
2026-04-18 00:50:46.768307 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-18 00:50:46.768311 | orchestrator | Saturday 18 April 2026 00:49:43 +0000 (0:00:00.109) 0:02:58.450 ********
2026-04-18 00:50:46.768314 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768318 | orchestrator |
2026-04-18 00:50:46.768322 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-18 00:50:46.768326 | orchestrator | Saturday 18 April 2026 00:49:44 +0000 (0:00:00.272) 0:02:58.723 ********
2026-04-18 00:50:46.768329 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768333 | orchestrator |
2026-04-18 00:50:46.768337 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-18 00:50:46.768344 | orchestrator | Saturday 18 April 2026 00:49:44 +0000 (0:00:00.122) 0:02:58.845 ********
2026-04-18 00:50:46.768347 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-18 00:50:46.768351 | orchestrator |
2026-04-18 00:50:46.768355 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-18 00:50:46.768358 | orchestrator | Saturday 18 April 2026 00:49:48 +0000 (0:00:04.226) 0:03:03.072 ********
2026-04-18 00:50:46.768362 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-18 00:50:46.768366 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-18 00:50:46.768370 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-18 00:50:46.768374 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-18 00:50:46.768377 | orchestrator |
2026-04-18 00:50:46.768381 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-18 00:50:46.768385 | orchestrator | Saturday 18 April 2026 00:50:21 +0000 (0:00:33.202) 0:03:36.274 ********
2026-04-18 00:50:46.768389 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-18 00:50:46.768392 | orchestrator |
2026-04-18 00:50:46.768398 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-18 00:50:46.768402 | orchestrator | Saturday 18 April 2026 00:50:22 +0000 (0:00:01.096) 0:03:37.370 ********
2026-04-18 00:50:46.768406 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-18 00:50:46.768410 | orchestrator |
2026-04-18 00:50:46.768413 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-18 00:50:46.768417 | orchestrator | Saturday 18 April 2026 00:50:24 +0000 (0:00:01.471) 0:03:38.842 ********
2026-04-18 00:50:46.768421 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-18 00:50:46.768425 | orchestrator |
2026-04-18 00:50:46.768428 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-18 00:50:46.768432 | orchestrator | Saturday 18 April 2026 00:50:25 +0000 (0:00:00.971) 0:03:39.814 ********
2026-04-18 00:50:46.768436 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768440 | orchestrator |
2026-04-18 00:50:46.768443 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-18 00:50:46.768447 | orchestrator | Saturday 18 April 2026 00:50:25 +0000 (0:00:00.115) 0:03:39.929 ********
2026-04-18 00:50:46.768451 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-18 00:50:46.768454 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-18 00:50:46.768458 | orchestrator |
2026-04-18 00:50:46.768462 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-18 00:50:46.768466 | orchestrator | Saturday 18 April 2026 00:50:26 +0000 (0:00:01.648) 0:03:41.578 ********
2026-04-18 00:50:46.768469 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768473 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:50:46.768477 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:50:46.768480 | orchestrator |
2026-04-18 00:50:46.768484 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-18 00:50:46.768488 | orchestrator | Saturday 18 April 2026 00:50:27 +0000 (0:00:00.273) 0:03:41.852 ********
2026-04-18 00:50:46.768492 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.768495 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.768499 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.768503 | orchestrator |
2026-04-18 00:50:46.768506 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-18 00:50:46.768510 | orchestrator |
2026-04-18 00:50:46.768514 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-18 00:50:46.768520 | orchestrator | Saturday 18 April 2026 00:50:28 +0000 (0:00:01.021) 0:03:42.874 ********
2026-04-18 00:50:46.768524 | orchestrator | ok: [testbed-manager]
2026-04-18 00:50:46.768528 | orchestrator |
2026-04-18 00:50:46.768535 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-18 00:50:46.768538 | orchestrator | Saturday 18 April 2026 00:50:28 +0000 (0:00:00.119) 0:03:42.993 ********
2026-04-18 00:50:46.768542 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-18 00:50:46.768546 | orchestrator |
2026-04-18 00:50:46.768550 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-18 00:50:46.768553 | orchestrator | Saturday 18 April 2026 00:50:28 +0000 (0:00:00.206) 0:03:43.200 ********
2026-04-18 00:50:46.768557 | orchestrator | changed: [testbed-manager]
2026-04-18 00:50:46.768561 | orchestrator |
2026-04-18 00:50:46.768565 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-18 00:50:46.768568 | orchestrator |
2026-04-18 00:50:46.768572 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-18 00:50:46.768576 | orchestrator | Saturday 18 April 2026 00:50:33 +0000 (0:00:04.829) 0:03:48.029 ********
2026-04-18 00:50:46.768580 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:50:46.768583 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:50:46.768587 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:50:46.768591 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:50:46.768597 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:50:46.768602 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:50:46.768608 | orchestrator |
2026-04-18 00:50:46.768615 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-18 00:50:46.768621 | orchestrator | Saturday 18 April 2026 00:50:33 +0000 (0:00:00.498) 0:03:48.528 ********
2026-04-18 00:50:46.768627 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-18 00:50:46.768632 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-18 00:50:46.768639 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-18 00:50:46.768644 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-18 00:50:46.768650 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-18 00:50:46.768656 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-18 00:50:46.768662 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-18 00:50:46.768669 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-18 00:50:46.768675 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-18 00:50:46.768681 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-18 00:50:46.768687 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-18 00:50:46.768694 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-18 00:50:46.768700 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-18 00:50:46.768706 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-18 00:50:46.768718 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-18 00:50:46.768725 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-18 00:50:46.768730 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-18 00:50:46.768736 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-18 00:50:46.768742 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-18 00:50:46.768749 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-18 00:50:46.768755 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-18 00:50:46.768769 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-18 00:50:46.768776 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-18 00:50:46.768782 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-18 00:50:46.768789 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-18 00:50:46.768796 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-18 00:50:46.768802 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-18 00:50:46.768809 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-18 00:50:46.768815 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-18 00:50:46.768821 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-18 00:50:46.768824 | orchestrator |
2026-04-18 00:50:46.768828 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-18 00:50:46.768832 | orchestrator | Saturday 18 April 2026 00:50:43 +0000 (0:00:09.759) 0:03:58.287 ********
2026-04-18 00:50:46.768836 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:50:46.768843 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:50:46.768847 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:50:46.768865 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768869 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:50:46.768872 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:50:46.768876 | orchestrator |
2026-04-18 00:50:46.768880 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-18 00:50:46.768884 | orchestrator | Saturday 18 April 2026 00:50:44 +0000 (0:00:00.561) 0:03:58.849 ********
2026-04-18 00:50:46.768887 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:50:46.768891 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:50:46.768895 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:50:46.768898 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:50:46.768902 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:50:46.768906 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:50:46.768909 | orchestrator |
2026-04-18 00:50:46.768913 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:50:46.768917 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:50:46.768922 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-18 00:50:46.768926 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-18 00:50:46.768930 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-18 00:50:46.768934 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-18 00:50:46.768937 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-18 00:50:46.768941 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-18 00:50:46.768945 | orchestrator |
2026-04-18 00:50:46.768948 | orchestrator |
2026-04-18 00:50:46.768952 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:50:46.768960 | orchestrator | Saturday 18 April 2026 00:50:44 +0000 (0:00:00.400) 0:03:59.249 ********
2026-04-18 00:50:46.768964 | orchestrator | ===============================================================================
2026-04-18 00:50:46.768967 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.61s
2026-04-18 00:50:46.768971 | orchestrator | k3s_server_post : Wait for Cilium
resources ---------------------------- 33.20s 2026-04-18 00:50:46.768975 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.80s 2026-04-18 00:50:46.768979 | orchestrator | kubectl : Install required packages ------------------------------------ 10.74s 2026-04-18 00:50:46.768982 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.12s 2026-04-18 00:50:46.768989 | orchestrator | Manage labels ----------------------------------------------------------- 9.76s 2026-04-18 00:50:46.768993 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.32s 2026-04-18 00:50:46.768997 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.51s 2026-04-18 00:50:46.769000 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.83s 2026-04-18 00:50:46.769004 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.23s 2026-04-18 00:50:46.769008 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.67s 2026-04-18 00:50:46.769012 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.64s 2026-04-18 00:50:46.769015 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.55s 2026-04-18 00:50:46.769019 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.37s 2026-04-18 00:50:46.769023 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.16s 2026-04-18 00:50:46.769027 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.05s 2026-04-18 00:50:46.769030 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.92s 2026-04-18 00:50:46.769034 | 
orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.87s 2026-04-18 00:50:46.769038 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.71s 2026-04-18 00:50:46.769041 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.65s 2026-04-18 00:50:46.769045 | orchestrator | 2026-04-18 00:50:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:50:49.787930 | orchestrator | 2026-04-18 00:50:49 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:50:49.789501 | orchestrator | 2026-04-18 00:50:49 | INFO  | Task c8b4e3dd-4e6a-4067-a4ae-25297700df2c is in state STARTED 2026-04-18 00:50:49.789783 | orchestrator | 2026-04-18 00:50:49 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED 2026-04-18 00:50:49.791669 | orchestrator | 2026-04-18 00:50:49 | INFO  | Task 466415e4-b7c5-4e8e-8dca-3893ce0557cc is in state STARTED 2026-04-18 00:50:49.791719 | orchestrator | 2026-04-18 00:50:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:50:52.824028 | orchestrator | 2026-04-18 00:50:52 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:50:52.825473 | orchestrator | 2026-04-18 00:50:52 | INFO  | Task c8b4e3dd-4e6a-4067-a4ae-25297700df2c is in state STARTED 2026-04-18 00:50:52.826224 | orchestrator | 2026-04-18 00:50:52 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED 2026-04-18 00:50:52.827714 | orchestrator | 2026-04-18 00:50:52 | INFO  | Task 466415e4-b7c5-4e8e-8dca-3893ce0557cc is in state SUCCESS 2026-04-18 00:50:52.827819 | orchestrator | 2026-04-18 00:50:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:50:55.862773 | orchestrator | 2026-04-18 00:50:55 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:50:55.863791 | orchestrator | 2026-04-18 00:50:55 | INFO  | Task 
c8b4e3dd-4e6a-4067-a4ae-25297700df2c is in state SUCCESS 2026-04-18 00:50:55.865679 | orchestrator | 2026-04-18 00:50:55 | INFO  | Task 974ab00b-c6c9-423d-8b42-6a869e768684 is in state STARTED 2026-04-18 00:50:55.865758 | orchestrator | 2026-04-18 00:50:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:15.812865 | orchestrator | 2026-04-18 00:53:15 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:15.815891 | orchestrator | 2026-04-18 00:53:15 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:15.825288 | orchestrator | 2026-04-18 00:53:15 | INFO  | Task 
974ab00b-c6c9-423d-8b42-6a869e768684 is in state SUCCESS 2026-04-18 00:53:15.827584 | orchestrator | 2026-04-18 00:53:15.827660 | orchestrator | 2026-04-18 00:53:15.827670 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-18 00:53:15.827678 | orchestrator | 2026-04-18 00:53:15.827684 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-18 00:53:15.827692 | orchestrator | Saturday 18 April 2026 00:50:47 +0000 (0:00:00.196) 0:00:00.196 ******** 2026-04-18 00:53:15.827700 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-18 00:53:15.827729 | orchestrator | 2026-04-18 00:53:15.827734 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-18 00:53:15.827738 | orchestrator | Saturday 18 April 2026 00:50:48 +0000 (0:00:01.037) 0:00:01.234 ******** 2026-04-18 00:53:15.827742 | orchestrator | changed: [testbed-manager] 2026-04-18 00:53:15.827746 | orchestrator | 2026-04-18 00:53:15.827750 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-18 00:53:15.827754 | orchestrator | Saturday 18 April 2026 00:50:49 +0000 (0:00:01.229) 0:00:02.463 ******** 2026-04-18 00:53:15.827757 | orchestrator | changed: [testbed-manager] 2026-04-18 00:53:15.827761 | orchestrator | 2026-04-18 00:53:15.827765 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:53:15.827769 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:53:15.827775 | orchestrator | 2026-04-18 00:53:15.827778 | orchestrator | 2026-04-18 00:53:15.827782 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:53:15.827786 | orchestrator | Saturday 18 April 2026 00:50:50 +0000 (0:00:00.379) 0:00:02.843 ******** 
2026-04-18 00:53:15.827789 | orchestrator | =============================================================================== 2026-04-18 00:53:15.827793 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s 2026-04-18 00:53:15.827797 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.04s 2026-04-18 00:53:15.827801 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.38s 2026-04-18 00:53:15.827804 | orchestrator | 2026-04-18 00:53:15.827808 | orchestrator | 2026-04-18 00:53:15.827812 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-18 00:53:15.827815 | orchestrator | 2026-04-18 00:53:15.827819 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-18 00:53:15.827823 | orchestrator | Saturday 18 April 2026 00:50:47 +0000 (0:00:00.202) 0:00:00.202 ******** 2026-04-18 00:53:15.827826 | orchestrator | ok: [testbed-manager] 2026-04-18 00:53:15.827831 | orchestrator | 2026-04-18 00:53:15.827891 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-18 00:53:15.827896 | orchestrator | Saturday 18 April 2026 00:50:48 +0000 (0:00:00.810) 0:00:01.012 ******** 2026-04-18 00:53:15.827902 | orchestrator | ok: [testbed-manager] 2026-04-18 00:53:15.827908 | orchestrator | 2026-04-18 00:53:15.827914 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-18 00:53:15.827921 | orchestrator | Saturday 18 April 2026 00:50:48 +0000 (0:00:00.517) 0:00:01.530 ******** 2026-04-18 00:53:15.827977 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-18 00:53:15.827985 | orchestrator | 2026-04-18 00:53:15.827991 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-18 00:53:15.827997 | 
orchestrator | Saturday 18 April 2026 00:50:49 +0000 (0:00:00.960) 0:00:02.490 ******** 2026-04-18 00:53:15.828003 | orchestrator | changed: [testbed-manager] 2026-04-18 00:53:15.828009 | orchestrator | 2026-04-18 00:53:15.828015 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-18 00:53:15.828201 | orchestrator | Saturday 18 April 2026 00:50:50 +0000 (0:00:00.854) 0:00:03.345 ******** 2026-04-18 00:53:15.828212 | orchestrator | changed: [testbed-manager] 2026-04-18 00:53:15.828216 | orchestrator | 2026-04-18 00:53:15.828220 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-18 00:53:15.828224 | orchestrator | Saturday 18 April 2026 00:50:51 +0000 (0:00:00.391) 0:00:03.737 ******** 2026-04-18 00:53:15.828228 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-18 00:53:15.828232 | orchestrator | 2026-04-18 00:53:15.828236 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-18 00:53:15.828239 | orchestrator | Saturday 18 April 2026 00:50:52 +0000 (0:00:01.291) 0:00:05.028 ******** 2026-04-18 00:53:15.828243 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-18 00:53:15.828259 | orchestrator | 2026-04-18 00:53:15.828267 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-18 00:53:15.828273 | orchestrator | Saturday 18 April 2026 00:50:53 +0000 (0:00:00.800) 0:00:05.829 ******** 2026-04-18 00:53:15.828280 | orchestrator | ok: [testbed-manager] 2026-04-18 00:53:15.828286 | orchestrator | 2026-04-18 00:53:15.828332 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-18 00:53:15.828340 | orchestrator | Saturday 18 April 2026 00:50:53 +0000 (0:00:00.288) 0:00:06.118 ******** 2026-04-18 00:53:15.828383 | orchestrator | ok: [testbed-manager] 2026-04-18 00:53:15.828389 | 
orchestrator | 2026-04-18 00:53:15.828393 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:53:15.828397 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:53:15.828402 | orchestrator | 2026-04-18 00:53:15.828405 | orchestrator | 2026-04-18 00:53:15.828409 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:53:15.828424 | orchestrator | Saturday 18 April 2026 00:50:53 +0000 (0:00:00.242) 0:00:06.360 ******** 2026-04-18 00:53:15.828428 | orchestrator | =============================================================================== 2026-04-18 00:53:15.828432 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.29s 2026-04-18 00:53:15.828436 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.96s 2026-04-18 00:53:15.828440 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.85s 2026-04-18 00:53:15.828455 | orchestrator | Get home directory of operator user ------------------------------------- 0.81s 2026-04-18 00:53:15.828459 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.80s 2026-04-18 00:53:15.828462 | orchestrator | Create .kube directory -------------------------------------------------- 0.52s 2026-04-18 00:53:15.828466 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.39s 2026-04-18 00:53:15.828470 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.29s 2026-04-18 00:53:15.828476 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.24s 2026-04-18 00:53:15.828483 | orchestrator | 2026-04-18 00:53:15.828490 | orchestrator | 2026-04-18 00:53:15.828496 | orchestrator | PLAY [Group hosts based 
on configuration] ************************************** 2026-04-18 00:53:15.828503 | orchestrator | 2026-04-18 00:53:15.828510 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-18 00:53:15.828517 | orchestrator | Saturday 18 April 2026 00:47:59 +0000 (0:00:00.376) 0:00:00.376 ******** 2026-04-18 00:53:15.828523 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.828530 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.828535 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.828539 | orchestrator | 2026-04-18 00:53:15.828542 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-18 00:53:15.828546 | orchestrator | Saturday 18 April 2026 00:47:59 +0000 (0:00:00.301) 0:00:00.677 ******** 2026-04-18 00:53:15.828550 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-04-18 00:53:15.828554 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-04-18 00:53:15.828557 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-04-18 00:53:15.828562 | orchestrator | 2026-04-18 00:53:15.828568 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-04-18 00:53:15.828574 | orchestrator | 2026-04-18 00:53:15.828580 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-18 00:53:15.828586 | orchestrator | Saturday 18 April 2026 00:48:00 +0000 (0:00:00.710) 0:00:01.388 ******** 2026-04-18 00:53:15.829570 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.829591 | orchestrator | 2026-04-18 00:53:15.829596 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-04-18 00:53:15.829611 | orchestrator | Saturday 18 April 2026 00:48:01 +0000 (0:00:01.608) 
0:00:02.996 ******** 2026-04-18 00:53:15.829615 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.829619 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.829623 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.829626 | orchestrator | 2026-04-18 00:53:15.829631 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-18 00:53:15.829638 | orchestrator | Saturday 18 April 2026 00:48:03 +0000 (0:00:01.204) 0:00:04.200 ******** 2026-04-18 00:53:15.829644 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.829650 | orchestrator | 2026-04-18 00:53:15.829656 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-04-18 00:53:15.829662 | orchestrator | Saturday 18 April 2026 00:48:03 +0000 (0:00:00.534) 0:00:04.735 ******** 2026-04-18 00:53:15.829668 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.829673 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.829679 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.829684 | orchestrator | 2026-04-18 00:53:15.829690 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-04-18 00:53:15.829696 | orchestrator | Saturday 18 April 2026 00:48:04 +0000 (0:00:01.227) 0:00:05.963 ******** 2026-04-18 00:53:15.829703 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-18 00:53:15.829710 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-18 00:53:15.829714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-18 00:53:15.829718 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-18 00:53:15.829802 | orchestrator | ok: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-18 00:53:15.829808 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-18 00:53:15.829813 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-04-18 00:53:15.829817 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-18 00:53:15.829820 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-04-18 00:53:15.829825 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-18 00:53:15.829831 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-18 00:53:15.829837 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-18 00:53:15.829843 | orchestrator | 2026-04-18 00:53:15.829849 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-18 00:53:15.829861 | orchestrator | Saturday 18 April 2026 00:48:08 +0000 (0:00:03.362) 0:00:09.326 ******** 2026-04-18 00:53:15.829867 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-18 00:53:15.829874 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-18 00:53:15.829880 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-18 00:53:15.829886 | orchestrator | 2026-04-18 00:53:15.829893 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-18 00:53:15.829922 | orchestrator | Saturday 18 April 2026 00:48:09 +0000 (0:00:01.072) 0:00:10.399 ******** 2026-04-18 00:53:15.829929 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-04-18 00:53:15.829935 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-04-18 00:53:15.829941 | 
orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-04-18 00:53:15.829948 | orchestrator | 2026-04-18 00:53:15.829952 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-18 00:53:15.829956 | orchestrator | Saturday 18 April 2026 00:48:10 +0000 (0:00:01.478) 0:00:11.877 ******** 2026-04-18 00:53:15.829995 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-18 00:53:15.830000 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.830004 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-18 00:53:15.830008 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.830011 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-18 00:53:15.830067 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.830073 | orchestrator | 2026-04-18 00:53:15.830331 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-18 00:53:15.830350 | orchestrator | Saturday 18 April 2026 00:48:12 +0000 (0:00:01.175) 0:00:13.052 ******** 2026-04-18 00:53:15.830361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-18 00:53:15.830373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-18 00:53:15.830380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-18 00:53:15.830384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-18 00:53:15.830391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-18 00:53:15.830431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-18 00:53:15.830439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-18 00:53:15.830506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-18 00:53:15.830512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-18 00:53:15.830534 | orchestrator | 2026-04-18 00:53:15.830541 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-18 00:53:15.830548 | orchestrator | Saturday 18 April 2026 00:48:14 +0000 (0:00:02.123) 0:00:15.176 ******** 2026-04-18 00:53:15.830555 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.830561 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.830568 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.830574 | orchestrator | 2026-04-18 00:53:15.830580 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-18 00:53:15.830584 | orchestrator | Saturday 18 April 2026 00:48:15 +0000 (0:00:01.425) 0:00:16.602 ******** 2026-04-18 00:53:15.830617 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-18 00:53:15.830624 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-18 00:53:15.830663 | orchestrator | changed: [testbed-node-2] 
=> (item=users) 2026-04-18 00:53:15.830925 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-18 00:53:15.830931 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-18 00:53:15.830935 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-18 00:53:15.830940 | orchestrator | 2026-04-18 00:53:15.830947 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-18 00:53:15.830955 | orchestrator | Saturday 18 April 2026 00:48:17 +0000 (0:00:02.070) 0:00:18.672 ******** 2026-04-18 00:53:15.830961 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.830967 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.830971 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.830983 | orchestrator | 2026-04-18 00:53:15.830987 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-18 00:53:15.830991 | orchestrator | Saturday 18 April 2026 00:48:18 +0000 (0:00:01.342) 0:00:20.015 ******** 2026-04-18 00:53:15.830995 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.830998 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.831002 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.831006 | orchestrator | 2026-04-18 00:53:15.831010 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-18 00:53:15.831014 | orchestrator | Saturday 18 April 2026 00:48:21 +0000 (0:00:02.186) 0:00:22.202 ******** 2026-04-18 00:53:15.831123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-18 00:53:15.831159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-18 00:53:15.831166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.831174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-18 00:53:15.831181 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.831188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-18 00:53:15.831199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-18 00:53:15.831238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-18 00:53:15.831244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.831248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-18 00:53:15.831252 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.831273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-18 00:53:15.831278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.831282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-18 00:53:15.831290 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.831294 | 
orchestrator | 2026-04-18 00:53:15.831298 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-18 00:53:15.831302 | orchestrator | Saturday 18 April 2026 00:48:22 +0000 (0:00:00.934) 0:00:23.136 ******** 2026-04-18 00:53:15.831309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-18 00:53:15.831333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-18 00:53:15.831338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-18 00:53:15.831342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-18 00:53:15.831346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.831354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-18 00:53:15.831360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-18 00:53:15.831386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.831393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.831402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-18 00:53:15.831410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.831423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523', '__omit_place_holder__b8953d3d7ef50fb3b0cbbdcd1ff0d01e530a2523'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-18 00:53:15.831429 | orchestrator |
2026-04-18 00:53:15.831435 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-04-18 00:53:15.831442 | orchestrator | Saturday 18 April 2026 00:48:25 +0000 (0:00:03.497) 0:00:26.634 ********
2026-04-18 00:53:15.831453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.831878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.831904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.831909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.831913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.831927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.831931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.831941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.832004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.832010 | orchestrator |
2026-04-18 00:53:15.832014 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-18 00:53:15.832018 | orchestrator | Saturday 18 April 2026 00:48:29 +0000 (0:00:03.649) 0:00:30.284 ********
2026-04-18 00:53:15.832023 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-18 00:53:15.832028 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-18 00:53:15.832031 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-18 00:53:15.832035 | orchestrator |
2026-04-18 00:53:15.832039 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-18 00:53:15.832042 | orchestrator | Saturday 18 April 2026 00:48:31 +0000 (0:00:01.788) 0:00:32.072 ********
2026-04-18 00:53:15.832046 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-18 00:53:15.832050 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-18 00:53:15.832054 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-18 00:53:15.832062 | orchestrator |
2026-04-18 00:53:15.832066 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-18 00:53:15.832070 | orchestrator | Saturday 18 April 2026 00:48:34 +0000 (0:00:03.937) 0:00:36.010 ********
2026-04-18 00:53:15.832074 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.832078 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.832081 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.832085 | orchestrator |
2026-04-18 00:53:15.832089 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-18 00:53:15.832093 | orchestrator | Saturday 18 April 2026 00:48:35 +0000 (0:00:00.510) 0:00:36.520 ********
2026-04-18 00:53:15.832096 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-18 00:53:15.832102 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-18 00:53:15.832106 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-18 00:53:15.832110 | orchestrator |
2026-04-18 00:53:15.832113 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-18 00:53:15.832117 | orchestrator | Saturday 18 April 2026 00:48:37 +0000 (0:00:02.408) 0:00:38.929 ********
2026-04-18 00:53:15.832121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-18 00:53:15.832125 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-18 00:53:15.832144 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-18 00:53:15.832150 | orchestrator |
2026-04-18 00:53:15.832156 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-18 00:53:15.832162 | orchestrator | Saturday 18 April 2026 00:48:39 +0000 (0:00:01.655) 0:00:40.584 ********
2026-04-18 00:53:15.832167 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:53:15.832173 | orchestrator |
2026-04-18 00:53:15.832179 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-18 00:53:15.832185 | orchestrator | Saturday 18 April 2026 00:48:40 +0000 (0:00:00.482) 0:00:41.067 ********
2026-04-18 00:53:15.832190 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-04-18 00:53:15.832196 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-04-18 00:53:15.832243 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-04-18 00:53:15.832252 | orchestrator |
2026-04-18 00:53:15.832257 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-04-18 00:53:15.832263 | orchestrator | Saturday 18 April 2026 00:48:41 +0000 (0:00:01.770) 0:00:42.838 ********
2026-04-18 00:53:15.832269 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-04-18 00:53:15.833392 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-04-18 00:53:15.833413 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-04-18 00:53:15.833417 | orchestrator |
2026-04-18 00:53:15.833422 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-04-18 00:53:15.833426 | orchestrator | Saturday 18 April 2026 00:48:43 +0000 (0:00:01.547) 0:00:44.385 ********
2026-04-18 00:53:15.833430 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.833433 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.833437 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.833441 | orchestrator |
2026-04-18 00:53:15.833518 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] ****************************
2026-04-18 00:53:15.833527 | orchestrator | Saturday 18 April 2026 00:48:43 +0000 (0:00:00.287) 0:00:44.673 ********
2026-04-18 00:53:15.833533 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.833550 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.833556 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.833561 | orchestrator |
2026-04-18 00:53:15.833567 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-18 00:53:15.833573 | orchestrator | Saturday 18 April 2026 00:48:43 +0000 (0:00:00.286) 0:00:44.960 ********
2026-04-18 00:53:15.833578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.833583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.833587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.833591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.833596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834362 | orchestrator |
2026-04-18 00:53:15.834368 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-18 00:53:15.834376 | orchestrator | Saturday 18 April 2026 00:48:47 +0000 (0:00:03.547) 0:00:48.507 ********
2026-04-18 00:53:15.834382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.834388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834410 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.834431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.834437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834449 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.834455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.834461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834483 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.834487 | orchestrator |
2026-04-18 00:53:15.834490 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-18 00:53:15.834494 | orchestrator | Saturday 18 April 2026 00:48:48 +0000 (0:00:00.580) 0:00:49.087 ********
2026-04-18 00:53:15.834508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.834513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834520 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.834524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.834528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834540 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.834547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.834555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834563 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.834566 | orchestrator |
2026-04-18 00:53:15.834570 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-04-18 00:53:15.834574 | orchestrator | Saturday 18 April 2026 00:48:48 +0000 (0:00:00.743) 0:00:49.831 ********
2026-04-18 00:53:15.834578 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-18 00:53:15.834583 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-18 00:53:15.834587 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-04-18 00:53:15.834590 | orchestrator |
2026-04-18 00:53:15.834594 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-04-18 00:53:15.834598 | orchestrator | Saturday 18 April 2026 00:48:50 +0000 (0:00:01.918) 0:00:51.750 ********
2026-04-18 00:53:15.834602 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-18 00:53:15.834608 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-18 00:53:15.834718 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-04-18 00:53:15.834726 | orchestrator |
2026-04-18 00:53:15.834730 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-04-18 00:53:15.834734 | orchestrator | Saturday 18 April 2026 00:48:52 +0000 (0:00:00.775) 0:00:53.558 ********
2026-04-18 00:53:15.834738 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-18 00:53:15.834757 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-18 00:53:15.834761 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-04-18 00:53:15.834764 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-18 00:53:15.834768 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.834772 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-18 00:53:15.834776 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.834780 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-18 00:53:15.834785 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.834791 | orchestrator |
2026-04-18 00:53:15.834796 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-18 00:53:15.834801 | orchestrator | Saturday 18 April 2026 00:48:53 +0000 (0:00:00.775) 0:00:54.333 ********
2026-04-18 00:53:15.834813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.834827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.834833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-18 00:53:15.834839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-18 00:53:15.834862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-18 00:53:15.834879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/',
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-18 00:53:15.834886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-18 00:53:15.834893 | orchestrator | 2026-04-18 00:53:15.834899 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-18 00:53:15.834906 | orchestrator | Saturday 18 April 2026 00:48:55 +0000 (0:00:02.510) 0:00:56.844 ******** 2026-04-18 00:53:15.834913 | orchestrator | changed: [testbed-node-0] => { 2026-04-18 00:53:15.834919 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:53:15.834926 | orchestrator | } 2026-04-18 00:53:15.834932 | orchestrator | changed: [testbed-node-1] => { 2026-04-18 00:53:15.834943 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:53:15.834950 | orchestrator | } 2026-04-18 00:53:15.834955 | orchestrator | changed: [testbed-node-2] => { 2026-04-18 00:53:15.834961 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:53:15.834968 | orchestrator | } 2026-04-18 00:53:15.834974 | orchestrator | 2026-04-18 00:53:15.834979 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-18 00:53:15.834985 | orchestrator | Saturday 18 April 2026 00:48:56 +0000 (0:00:00.348) 0:00:57.192 ******** 2026-04-18 00:53:15.834991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-18 00:53:15.835004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-18 00:53:15.835011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.835017 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.835027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-18 00:53:15.835041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-18 00:53:15.835048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.835054 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.835061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-18 00:53:15.835073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-18 00:53:15.835079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.835086 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.835092 | orchestrator | 2026-04-18 00:53:15.835098 | orchestrator | TASK [include_role : aodh] 
***************************************************** 2026-04-18 00:53:15.835104 | orchestrator | Saturday 18 April 2026 00:48:57 +0000 (0:00:01.612) 0:00:58.805 ******** 2026-04-18 00:53:15.835110 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.835116 | orchestrator | 2026-04-18 00:53:15.835122 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-18 00:53:15.835168 | orchestrator | Saturday 18 April 2026 00:48:58 +0000 (0:00:00.852) 0:00:59.657 ******** 2026-04-18 00:53:15.835187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.835196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.835203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.835228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.835356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.835365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.835390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835402 | orchestrator | 2026-04-18 00:53:15.835408 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-18 00:53:15.835415 | orchestrator | Saturday 18 April 2026 00:49:02 +0000 (0:00:03.425) 0:01:03.083 ******** 2026-04-18 00:53:15.835430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.835442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.835448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835461 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.835468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.835477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.835488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835504 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.835510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': 
{'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.835516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.835522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835538 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.835544 | orchestrator | 2026-04-18 00:53:15.835553 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-18 00:53:15.835566 | orchestrator | Saturday 18 April 2026 00:49:02 +0000 (0:00:00.592) 0:01:03.675 ******** 2026-04-18 00:53:15.835573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.835583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.835591 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.835596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.835602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.835608 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.835614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.835619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.835625 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.835630 | orchestrator | 2026-04-18 00:53:15.835636 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-18 00:53:15.835642 | orchestrator | Saturday 18 April 2026 00:49:03 +0000 (0:00:00.858) 0:01:04.534 ******** 2026-04-18 00:53:15.835648 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.835653 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.835659 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.835665 | orchestrator | 2026-04-18 00:53:15.835670 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-18 00:53:15.835677 | orchestrator | Saturday 18 April 2026 00:49:04 +0000 (0:00:01.237) 0:01:05.772 ******** 2026-04-18 00:53:15.835682 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.835688 | orchestrator | changed: 
[testbed-node-1] 2026-04-18 00:53:15.835694 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.835700 | orchestrator | 2026-04-18 00:53:15.835705 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-18 00:53:15.835711 | orchestrator | Saturday 18 April 2026 00:49:06 +0000 (0:00:01.753) 0:01:07.526 ******** 2026-04-18 00:53:15.835717 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.835723 | orchestrator | 2026-04-18 00:53:15.835728 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-18 00:53:15.835734 | orchestrator | Saturday 18 April 2026 00:49:07 +0000 (0:00:00.568) 0:01:08.094 ******** 2026-04-18 00:53:15.835741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.835762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.835782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.835810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835821 | orchestrator | 2026-04-18 00:53:15.835828 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-18 00:53:15.835833 | orchestrator | Saturday 18 April 2026 00:49:10 +0000 (0:00:03.244) 0:01:11.338 ******** 2026-04-18 00:53:15.835863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.835870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835894 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.835900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.835907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835919 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.835925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.835945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.835958 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.835964 | orchestrator | 2026-04-18 00:53:15.835970 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-18 00:53:15.835976 | orchestrator | Saturday 18 April 2026 00:49:11 +0000 (0:00:00.736) 0:01:12.074 ******** 2026-04-18 00:53:15.835983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.835989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.835997 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.836094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.836101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.836111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.836125 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.836146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.836153 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.836159 | orchestrator | 2026-04-18 00:53:15.836165 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-18 00:53:15.836171 | orchestrator | Saturday 18 April 2026 00:49:11 +0000 (0:00:00.716) 0:01:12.791 ******** 2026-04-18 00:53:15.836177 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.836184 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.836190 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.836196 | orchestrator | 2026-04-18 00:53:15.836202 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-18 00:53:15.836208 | orchestrator | Saturday 18 April 2026 00:49:13 +0000 (0:00:01.256) 0:01:14.048 ******** 2026-04-18 00:53:15.836213 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.836219 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.836224 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.836230 | orchestrator | 2026-04-18 00:53:15.836236 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-18 00:53:15.836242 | orchestrator | Saturday 18 April 2026 00:49:14 +0000 (0:00:01.959) 0:01:16.007 ******** 2026-04-18 00:53:15.836248 | orchestrator | skipping: 
[testbed-node-0] 2026-04-18 00:53:15.836253 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.836259 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.836265 | orchestrator | 2026-04-18 00:53:15.836271 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-18 00:53:15.836278 | orchestrator | Saturday 18 April 2026 00:49:15 +0000 (0:00:00.240) 0:01:16.247 ******** 2026-04-18 00:53:15.836284 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.836290 | orchestrator | 2026-04-18 00:53:15.836301 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-18 00:53:15.836307 | orchestrator | Saturday 18 April 2026 00:49:15 +0000 (0:00:00.711) 0:01:16.959 ******** 2026-04-18 00:53:15.836320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-18 00:53:15.836325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-18 00:53:15.836334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-18 00:53:15.836338 | orchestrator | 2026-04-18 00:53:15.836342 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-18 00:53:15.836346 | orchestrator | Saturday 18 April 2026 00:49:18 +0000 (0:00:02.416) 0:01:19.376 ******** 2026-04-18 00:53:15.836350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-18 00:53:15.836354 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.838278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-18 00:53:15.838350 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.838361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-18 00:53:15.838368 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.838374 | orchestrator | 2026-04-18 00:53:15.838381 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-18 00:53:15.838408 | orchestrator | Saturday 18 April 2026 00:49:19 +0000 (0:00:01.421) 0:01:20.798 ******** 2026-04-18 00:53:15.838418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-18 00:53:15.838427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-18 00:53:15.838435 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.838442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-18 00:53:15.838447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-18 00:53:15.838453 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.838459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-18 00:53:15.838471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-18 00:53:15.838477 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.838482 | orchestrator | 2026-04-18 00:53:15.838488 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-04-18 00:53:15.838494 | orchestrator | Saturday 18 April 2026 00:49:21 +0000 (0:00:01.777) 0:01:22.575 ********
2026-04-18 00:53:15.838500 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.838506 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.838512 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.838518 | orchestrator |
2026-04-18 00:53:15.838535 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-04-18 00:53:15.838542 | orchestrator | Saturday 18 April 2026 00:49:21 +0000 (0:00:00.374) 0:01:22.950 ********
2026-04-18 00:53:15.838548 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.838554 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.838576 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.838582 | orchestrator |
2026-04-18 00:53:15.838596 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-04-18 00:53:15.838602 | orchestrator | Saturday 18 April 2026 00:49:22 +0000 (0:00:00.978) 0:01:23.929 ********
2026-04-18 00:53:15.838616 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:53:15.838622 | orchestrator |
2026-04-18 00:53:15.838628 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-04-18 00:53:15.838634 | orchestrator | Saturday 18 April 2026 00:49:23 +0000 (0:00:00.720) 0:01:24.649 ********
2026-04-18 00:53:15.838642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.838650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.838698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.838734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838758 | orchestrator |
2026-04-18 00:53:15.838765 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-04-18 00:53:15.838771 | orchestrator | Saturday 18 April 2026 00:49:27 +0000 (0:00:03.917) 0:01:28.567 ********
2026-04-18 00:53:15.838778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.838788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838819 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.838825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.838833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.838860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838880 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.838886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.838899 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.838905 | orchestrator |
2026-04-18 00:53:15.838916 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-04-18 00:53:15.838922 | orchestrator | Saturday 18 April 2026 00:49:28 +0000 (0:00:00.546) 0:01:29.113 ********
2026-04-18 00:53:15.838932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-18 00:53:15.838941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-18 00:53:15.838951 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.838958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-18 00:53:15.838964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-18 00:53:15.838970 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.838977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-18 00:53:15.838983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-18 00:53:15.838989 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.838996 | orchestrator |
2026-04-18 00:53:15.839002 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-04-18 00:53:15.839008 | orchestrator | Saturday 18 April 2026 00:49:28 +0000 (0:00:00.851) 0:01:29.965 ********
2026-04-18 00:53:15.839014 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:53:15.839020 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:53:15.839027 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:53:15.839033 | orchestrator |
2026-04-18 00:53:15.839039 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-04-18 00:53:15.839045 | orchestrator | Saturday 18 April 2026 00:49:30 +0000 (0:00:01.269) 0:01:31.235 ********
2026-04-18 00:53:15.839051 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:53:15.839057 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:53:15.839063 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:53:15.839070 | orchestrator |
2026-04-18 00:53:15.839076 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-04-18 00:53:15.839082 | orchestrator | Saturday 18 April 2026 00:49:32 +0000 (0:00:00.247) 0:01:33.023 ********
2026-04-18 00:53:15.839089 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.839095 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.839101 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.839107 | orchestrator |
2026-04-18 00:53:15.839113 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-04-18 00:53:15.839119 | orchestrator | Saturday 18 April 2026 00:49:32 +0000 (0:00:00.227) 0:01:33.270 ********
2026-04-18 00:53:15.839125 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.839168 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.839175 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.839181 | orchestrator |
2026-04-18 00:53:15.839187 | orchestrator | TASK [include_role : designate] ************************************************
2026-04-18 00:53:15.839203 | orchestrator | Saturday 18 April 2026 00:49:32 +0000 (0:00:00.227) 0:01:33.498 ********
2026-04-18 00:53:15.839216 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:53:15.839222 | orchestrator |
2026-04-18 00:53:15.839229 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-04-18 00:53:15.839235 | orchestrator | Saturday 18 April 2026 00:49:33 +0000 (0:00:00.746) 0:01:34.244 ********
2026-04-18 00:53:15.839242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.839259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-18 00:53:15.839266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.839319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-18 00:53:15.839326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.839378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-18 00:53:15.839389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.839433 | orchestrator | 2026-04-18 00:53:15.839439 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-18 00:53:15.839445 | orchestrator | Saturday 18 April 2026 00:49:37 +0000 (0:00:04.178) 0:01:38.422 ******** 2026-04-18 00:53:15.839452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.839463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-18 00:53:15.839470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.839492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-18 00:53:15.839509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839576 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.839582 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.839588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.839602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-18 00:53:15.839609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.839645 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.839651 | orchestrator | 2026-04-18 00:53:15.839657 | orchestrator | TASK 
[haproxy-config : Configuring firewall for designate] ********************* 2026-04-18 00:53:15.839664 | orchestrator | Saturday 18 April 2026 00:49:38 +0000 (0:00:01.070) 0:01:39.493 ******** 2026-04-18 00:53:15.839671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.839685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.839695 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.839705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.839713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.839721 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.839728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.839741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  
2026-04-18 00:53:15.839747 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.839753 | orchestrator | 2026-04-18 00:53:15.839758 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-18 00:53:15.839765 | orchestrator | Saturday 18 April 2026 00:49:39 +0000 (0:00:01.410) 0:01:40.904 ******** 2026-04-18 00:53:15.839771 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.839778 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.839784 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.839789 | orchestrator | 2026-04-18 00:53:15.839795 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-18 00:53:15.839802 | orchestrator | Saturday 18 April 2026 00:49:41 +0000 (0:00:01.270) 0:01:42.174 ******** 2026-04-18 00:53:15.839808 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.839815 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.839820 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.839827 | orchestrator | 2026-04-18 00:53:15.839832 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-18 00:53:15.839839 | orchestrator | Saturday 18 April 2026 00:49:43 +0000 (0:00:02.061) 0:01:44.236 ******** 2026-04-18 00:53:15.839845 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.839852 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.839857 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.839863 | orchestrator | 2026-04-18 00:53:15.839868 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-18 00:53:15.839875 | orchestrator | Saturday 18 April 2026 00:49:43 +0000 (0:00:00.376) 0:01:44.612 ******** 2026-04-18 00:53:15.839882 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.839888 | 
orchestrator | 2026-04-18 00:53:15.839895 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-18 00:53:15.839902 | orchestrator | Saturday 18 April 2026 00:49:44 +0000 (0:00:00.741) 0:01:45.354 ******** 2026-04-18 00:53:15.839916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-18 00:53:15.839935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-18 00:53:15.839940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-18 00:53:15.839957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-18 00:53:15.839969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-18 00:53:15.839984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-18 00:53:15.839997 | orchestrator | 2026-04-18 00:53:15.840003 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-18 00:53:15.840009 | orchestrator | Saturday 18 April 2026 00:49:48 +0000 (0:00:04.660) 0:01:50.015 ******** 2026-04-18 
00:53:15.840015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-18 00:53:15.840050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-18 00:53:15.840066 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-18 00:53:15.840091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-18 00:53:15.840100 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-18 00:53:15.840114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-18 00:53:15.840122 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840126 | orchestrator | 2026-04-18 00:53:15.840173 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-18 00:53:15.840178 | orchestrator | Saturday 18 April 2026 00:49:51 +0000 (0:00:02.828) 0:01:52.843 ******** 2026-04-18 00:53:15.840182 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-18 00:53:15.840187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-18 00:53:15.840192 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-18 00:53:15.840200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-18 00:53:15.840204 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-18 00:53:15.840216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-18 00:53:15.840226 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840230 | orchestrator | 2026-04-18 00:53:15.840234 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-18 00:53:15.840238 | orchestrator | Saturday 18 April 2026 00:49:54 +0000 (0:00:02.560) 0:01:55.404 ******** 2026-04-18 00:53:15.840242 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.840245 | orchestrator | changed: 
[testbed-node-0] 2026-04-18 00:53:15.840249 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.840253 | orchestrator | 2026-04-18 00:53:15.840257 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-18 00:53:15.840264 | orchestrator | Saturday 18 April 2026 00:49:55 +0000 (0:00:01.318) 0:01:56.722 ******** 2026-04-18 00:53:15.840268 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.840272 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.840276 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.840280 | orchestrator | 2026-04-18 00:53:15.840284 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-18 00:53:15.840288 | orchestrator | Saturday 18 April 2026 00:49:57 +0000 (0:00:01.767) 0:01:58.490 ******** 2026-04-18 00:53:15.840292 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840295 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840299 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840303 | orchestrator | 2026-04-18 00:53:15.840307 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-18 00:53:15.840311 | orchestrator | Saturday 18 April 2026 00:49:57 +0000 (0:00:00.261) 0:01:58.752 ******** 2026-04-18 00:53:15.840315 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.840319 | orchestrator | 2026-04-18 00:53:15.840322 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-18 00:53:15.840326 | orchestrator | Saturday 18 April 2026 00:49:58 +0000 (0:00:00.639) 0:01:59.391 ******** 2026-04-18 00:53:15.840331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.840335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.840344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.840348 | orchestrator | 2026-04-18 00:53:15.840352 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-18 00:53:15.840355 | orchestrator | Saturday 18 April 2026 00:50:01 +0000 (0:00:02.798) 0:02:02.190 ******** 2026-04-18 00:53:15.840366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.840370 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.840378 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.840386 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840389 | orchestrator | 2026-04-18 00:53:15.840393 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-18 00:53:15.840397 | orchestrator | Saturday 18 April 2026 00:50:01 +0000 (0:00:00.348) 0:02:02.538 ******** 2026-04-18 00:53:15.840405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.840409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.840413 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.840421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.840425 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.840433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.840437 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840440 | orchestrator | 2026-04-18 00:53:15.840444 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-18 00:53:15.840448 | orchestrator | Saturday 18 April 2026 00:50:02 +0000 (0:00:00.574) 0:02:03.113 ******** 2026-04-18 00:53:15.840452 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.840456 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.840463 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.840467 | orchestrator | 2026-04-18 00:53:15.840471 | orchestrator | TASK [proxysql-config : Copying over grafana 
ProxySQL rules config] ************ 2026-04-18 00:53:15.840475 | orchestrator | Saturday 18 April 2026 00:50:03 +0000 (0:00:01.223) 0:02:04.336 ******** 2026-04-18 00:53:15.840478 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.840482 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.840486 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.840490 | orchestrator | 2026-04-18 00:53:15.840493 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-18 00:53:15.840500 | orchestrator | Saturday 18 April 2026 00:50:04 +0000 (0:00:01.590) 0:02:05.927 ******** 2026-04-18 00:53:15.840504 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840509 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840512 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840516 | orchestrator | 2026-04-18 00:53:15.840520 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-18 00:53:15.840524 | orchestrator | Saturday 18 April 2026 00:50:05 +0000 (0:00:00.361) 0:02:06.288 ******** 2026-04-18 00:53:15.840528 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.840532 | orchestrator | 2026-04-18 00:53:15.840535 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-18 00:53:15.840539 | orchestrator | Saturday 18 April 2026 00:50:06 +0000 (0:00:00.788) 0:02:07.077 ******** 2026-04-18 00:53:15.840544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 
'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:53:15.840560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:53:15.840570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 
'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:53:15.840575 | orchestrator | 2026-04-18 00:53:15.840579 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-18 00:53:15.840583 | orchestrator | Saturday 18 April 2026 00:50:09 +0000 (0:00:03.514) 0:02:10.592 ******** 2026-04-18 00:53:15.840595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:53:15.840611 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:53:15.840637 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:53:15.840662 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840668 | orchestrator | 2026-04-18 00:53:15.840674 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-18 00:53:15.840680 | orchestrator | Saturday 18 April 2026 
00:50:10 +0000 (0:00:00.906) 0:02:11.498 ******** 2026-04-18 00:53:15.840688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-18 00:53:15.840696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-18 00:53:15.840704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-18 00:53:15.840712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-18 00:53:15.840723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-18 00:53:15.840729 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-18 00:53:15.840775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-18 00:53:15.840788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-18 00:53:15.840794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-18 00:53:15.840801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-18 00:53:15.840807 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  
2026-04-18 00:53:15.840820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-18 00:53:15.840827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-18 00:53:15.840834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-18 00:53:15.840839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-18 00:53:15.840843 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840847 | orchestrator | 2026-04-18 00:53:15.840851 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-18 00:53:15.840854 | orchestrator | Saturday 18 April 2026 00:50:11 +0000 (0:00:00.984) 0:02:12.483 ******** 2026-04-18 00:53:15.840858 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.840862 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.840866 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.840871 | orchestrator | 2026-04-18 00:53:15.840874 | orchestrator | TASK [proxysql-config : 
Copying over horizon ProxySQL rules config] ************ 2026-04-18 00:53:15.840878 | orchestrator | Saturday 18 April 2026 00:50:12 +0000 (0:00:01.232) 0:02:13.715 ******** 2026-04-18 00:53:15.840882 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.840887 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.840891 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.840895 | orchestrator | 2026-04-18 00:53:15.840899 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-18 00:53:15.840902 | orchestrator | Saturday 18 April 2026 00:50:14 +0000 (0:00:02.219) 0:02:15.935 ******** 2026-04-18 00:53:15.840912 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840916 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840920 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840924 | orchestrator | 2026-04-18 00:53:15.840933 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-18 00:53:15.840937 | orchestrator | Saturday 18 April 2026 00:50:15 +0000 (0:00:00.623) 0:02:16.559 ******** 2026-04-18 00:53:15.840941 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.840945 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.840948 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.840952 | orchestrator | 2026-04-18 00:53:15.840956 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-18 00:53:15.840960 | orchestrator | Saturday 18 April 2026 00:50:15 +0000 (0:00:00.288) 0:02:16.848 ******** 2026-04-18 00:53:15.840978 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.840983 | orchestrator | 2026-04-18 00:53:15.840987 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-18 00:53:15.840990 | orchestrator | 
Saturday 18 April 2026 00:50:16 +0000 (0:00:00.856) 0:02:17.704 ******** 2026-04-18 00:53:15.840995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:53:15.841001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:53:15.841006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:53:15.841010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:53:15.841026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:53:15.841030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:53:15.841034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:53:15.841038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:53:15.841044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:53:15.841058 | orchestrator | 2026-04-18 00:53:15.841068 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-18 00:53:15.841075 | orchestrator | Saturday 18 April 2026 00:50:20 +0000 (0:00:03.573) 0:02:21.278 ******** 2026-04-18 00:53:15.841101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:53:15.841109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:53:15.841115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:53:15.841121 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.841150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:53:15.841165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:53:15.841176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:53:15.841193 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.841201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:53:15.841208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:53:15.841214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:53:15.841225 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.841231 | orchestrator | 2026-04-18 00:53:15.841237 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-18 00:53:15.841244 | orchestrator | Saturday 18 April 2026 00:50:20 +0000 (0:00:00.718) 0:02:21.997 ******** 2026-04-18 00:53:15.841250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-18 00:53:15.841257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-18 00:53:15.841264 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.841270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-18 00:53:15.841280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-18 00:53:15.841287 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.841294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-18 00:53:15.841312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-18 00:53:15.841318 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.841325 | orchestrator | 2026-04-18 00:53:15.841332 | 
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-18 00:53:15.841338 | orchestrator | Saturday 18 April 2026 00:50:21 +0000 (0:00:00.773) 0:02:22.770 ******** 2026-04-18 00:53:15.841344 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.841350 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.841356 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.841363 | orchestrator | 2026-04-18 00:53:15.841369 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-18 00:53:15.841375 | orchestrator | Saturday 18 April 2026 00:50:23 +0000 (0:00:01.347) 0:02:24.117 ******** 2026-04-18 00:53:15.841381 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.841387 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.841393 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.841399 | orchestrator | 2026-04-18 00:53:15.841406 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-18 00:53:15.841412 | orchestrator | Saturday 18 April 2026 00:50:24 +0000 (0:00:01.752) 0:02:25.869 ******** 2026-04-18 00:53:15.841418 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.841424 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.841430 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.841436 | orchestrator | 2026-04-18 00:53:15.841442 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-18 00:53:15.841449 | orchestrator | Saturday 18 April 2026 00:50:25 +0000 (0:00:00.377) 0:02:26.247 ******** 2026-04-18 00:53:15.841456 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.841467 | orchestrator | 2026-04-18 00:53:15.841473 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-18 
00:53:15.841479 | orchestrator | Saturday 18 April 2026 00:50:26 +0000 (0:00:00.894) 0:02:27.141 ******** 2026-04-18 00:53:15.841486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.841494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841513 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.841521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.841540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841547 | orchestrator | 2026-04-18 00:53:15.841553 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-18 00:53:15.841560 | orchestrator | Saturday 18 April 2026 00:50:30 +0000 (0:00:03.964) 
0:02:31.106 ******** 2026-04-18 00:53:15.841570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.841587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841594 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.841601 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.841616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841622 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.841629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.841649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841657 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.841663 | orchestrator | 2026-04-18 00:53:15.841669 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] 
************************ 2026-04-18 00:53:15.841675 | orchestrator | Saturday 18 April 2026 00:50:30 +0000 (0:00:00.841) 0:02:31.948 ******** 2026-04-18 00:53:15.841682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.841697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.841704 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.841711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.841717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.841724 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.841730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.841737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.841743 | orchestrator | skipping: [testbed-node-2] 
2026-04-18 00:53:15.841749 | orchestrator | 2026-04-18 00:53:15.841756 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-18 00:53:15.841762 | orchestrator | Saturday 18 April 2026 00:50:31 +0000 (0:00:00.848) 0:02:32.796 ******** 2026-04-18 00:53:15.841769 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.841775 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.841782 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.841788 | orchestrator | 2026-04-18 00:53:15.841794 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-18 00:53:15.841800 | orchestrator | Saturday 18 April 2026 00:50:33 +0000 (0:00:01.298) 0:02:34.095 ******** 2026-04-18 00:53:15.841806 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.841813 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.841819 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.841825 | orchestrator | 2026-04-18 00:53:15.841831 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-18 00:53:15.841838 | orchestrator | Saturday 18 April 2026 00:50:35 +0000 (0:00:02.108) 0:02:36.203 ******** 2026-04-18 00:53:15.841844 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.841850 | orchestrator | 2026-04-18 00:53:15.841856 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-18 00:53:15.841862 | orchestrator | Saturday 18 April 2026 00:50:36 +0000 (0:00:01.100) 0:02:37.303 ******** 2026-04-18 00:53:15.841873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.841896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.841924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 
'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.841968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.841988 | orchestrator | 2026-04-18 00:53:15.841998 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-18 00:53:15.842007 | orchestrator | Saturday 18 April 2026 00:50:40 +0000 (0:00:04.245) 0:02:41.549 ******** 2026-04-18 00:53:15.842054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 
'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.842064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.842086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842122 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842143 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842157 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.842170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': 
{'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2024.2/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842208 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842214 | orchestrator | 2026-04-18 00:53:15.842221 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-18 00:53:15.842227 | orchestrator | Saturday 18 April 2026 00:50:41 +0000 (0:00:00.697) 0:02:42.246 ******** 2026-04-18 00:53:15.842233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.842239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.842246 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.842260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.842265 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.842273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.842277 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842281 | orchestrator | 2026-04-18 00:53:15.842285 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-18 00:53:15.842289 | orchestrator | Saturday 18 April 2026 00:50:42 +0000 (0:00:01.235) 0:02:43.481 ******** 2026-04-18 00:53:15.842293 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.842297 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.842300 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.842304 | orchestrator | 2026-04-18 00:53:15.842308 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-18 00:53:15.842312 | orchestrator | Saturday 18 April 2026 00:50:43 +0000 (0:00:01.222) 0:02:44.703 ******** 2026-04-18 00:53:15.842321 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.842325 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.842328 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.842332 | orchestrator | 2026-04-18 00:53:15.842336 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-18 00:53:15.842341 | orchestrator | Saturday 18 April 
2026 00:50:45 +0000 (0:00:01.773) 0:02:46.477 ******** 2026-04-18 00:53:15.842344 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.842348 | orchestrator | 2026-04-18 00:53:15.842352 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-18 00:53:15.842356 | orchestrator | Saturday 18 April 2026 00:50:46 +0000 (0:00:00.948) 0:02:47.425 ******** 2026-04-18 00:53:15.842360 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-18 00:53:15.842364 | orchestrator | 2026-04-18 00:53:15.842368 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-18 00:53:15.842372 | orchestrator | Saturday 18 April 2026 00:50:47 +0000 (0:00:01.544) 0:02:48.969 ******** 2026-04-18 00:53:15.842390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:53:15.842396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-18 00:53:15.842400 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:53:15.842421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-04-18 00:53:15.842425 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:53:15.842457 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-18 00:53:15.842461 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842465 | orchestrator | 2026-04-18 00:53:15.842469 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-18 00:53:15.842473 | orchestrator | Saturday 18 April 2026 00:50:50 +0000 (0:00:02.059) 0:02:51.029 ******** 2026-04-18 00:53:15.842490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:53:15.842496 | orchestrator | 2026-04-18 00:53:15 | INFO  | Task 8e32274e-d564-4f31-8cb7-318dec9e5b4c is in state STARTED 2026-04-18 00:53:15.842500 | orchestrator | 2026-04-18 00:53:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:15.842505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-18 00:53:15.842509 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:53:15.842524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-18 00:53:15.842528 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:53:15.842551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-18 00:53:15.842555 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842559 | orchestrator | 2026-04-18 00:53:15.842562 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-18 00:53:15.842566 | orchestrator | Saturday 18 April 2026 00:50:52 +0000 (0:00:02.106) 0:02:53.136 ******** 2026-04-18 00:53:15.842571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}})  2026-04-18 00:53:15.842575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-18 00:53:15.842579 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-18 00:53:15.842599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  
2026-04-18 00:53:15.842603 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-18 00:53:15.842615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-18 00:53:15.842619 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842623 | orchestrator | 2026-04-18 00:53:15.842627 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-18 00:53:15.842631 | orchestrator | Saturday 18 April 2026 00:50:54 +0000 (0:00:02.082) 0:02:55.218 ******** 2026-04-18 00:53:15.842635 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.842639 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.842642 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.842646 | orchestrator | 2026-04-18 00:53:15.842650 | orchestrator | TASK [proxysql-config 
: Copying over mariadb ProxySQL rules config] ************ 2026-04-18 00:53:15.842654 | orchestrator | Saturday 18 April 2026 00:50:55 +0000 (0:00:01.676) 0:02:56.895 ******** 2026-04-18 00:53:15.842658 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842661 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842665 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842669 | orchestrator | 2026-04-18 00:53:15.842673 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-18 00:53:15.842677 | orchestrator | Saturday 18 April 2026 00:50:57 +0000 (0:00:01.153) 0:02:58.049 ******** 2026-04-18 00:53:15.842681 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842685 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842688 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842692 | orchestrator | 2026-04-18 00:53:15.842696 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-18 00:53:15.842700 | orchestrator | Saturday 18 April 2026 00:50:57 +0000 (0:00:00.260) 0:02:58.310 ******** 2026-04-18 00:53:15.842704 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.842708 | orchestrator | 2026-04-18 00:53:15.842712 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-18 00:53:15.842716 | orchestrator | Saturday 18 April 2026 00:50:58 +0000 (0:00:00.943) 0:02:59.254 ******** 2026-04-18 00:53:15.842724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-18 00:53:15.842736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-18 00:53:15.842745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 
2026-04-18 00:53:15.842749 | orchestrator | 2026-04-18 00:53:15.842753 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-18 00:53:15.842757 | orchestrator | Saturday 18 April 2026 00:50:59 +0000 (0:00:01.631) 0:03:00.885 ******** 2026-04-18 00:53:15.842762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-18 00:53:15.842766 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-18 
00:53:15.842774 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2024.2/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-18 00:53:15.842793 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842800 | orchestrator | 2026-04-18 00:53:15.842804 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-18 00:53:15.842808 | orchestrator | Saturday 18 April 2026 00:51:00 +0000 (0:00:00.320) 0:03:01.205 ******** 2026-04-18 00:53:15.842812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-18 00:53:15.842817 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-18 00:53:15.842825 | orchestrator | skipping: 
[testbed-node-1] 2026-04-18 00:53:15.842829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-18 00:53:15.842833 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842837 | orchestrator | 2026-04-18 00:53:15.842841 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-18 00:53:15.842845 | orchestrator | Saturday 18 April 2026 00:51:00 +0000 (0:00:00.522) 0:03:01.728 ******** 2026-04-18 00:53:15.842848 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842852 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842856 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842859 | orchestrator | 2026-04-18 00:53:15.842863 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-18 00:53:15.842867 | orchestrator | Saturday 18 April 2026 00:51:01 +0000 (0:00:00.605) 0:03:02.333 ******** 2026-04-18 00:53:15.842871 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842875 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842878 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842882 | orchestrator | 2026-04-18 00:53:15.842886 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-18 00:53:15.842890 | orchestrator | Saturday 18 April 2026 00:51:02 +0000 (0:00:01.109) 0:03:03.443 ******** 2026-04-18 00:53:15.842894 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.842898 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.842901 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.842905 | orchestrator | 2026-04-18 00:53:15.842909 | orchestrator | TASK 
[include_role : neutron] ************************************************** 2026-04-18 00:53:15.842913 | orchestrator | Saturday 18 April 2026 00:51:02 +0000 (0:00:00.269) 0:03:03.712 ******** 2026-04-18 00:53:15.842917 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.842921 | orchestrator | 2026-04-18 00:53:15.842925 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-18 00:53:15.842928 | orchestrator | Saturday 18 April 2026 00:51:03 +0000 (0:00:01.032) 0:03:04.744 ******** 2026-04-18 00:53:15.842933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.842953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-18 00:53:15.842964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 
'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-18 00:53:15.842969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.842980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.842993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-18 00:53:15.842998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.843002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-18 00:53:15.843007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-18 00:53:15.843012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-18 00:53:15.843032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-18 00:53:15.843037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-18 00:53:15.843042 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.843046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.843050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-18 00:53:15.843057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-18 00:53:15.843063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-18 00:53:15.843076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-18 00:53:15.843080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-18 00:53:15.843084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.843088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-18 00:53:15.843095 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-18 00:53:15.843105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.843110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-18 00:53:15.843114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-18 00:53:15.843118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.843125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-18 00:53:15.843143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.843164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-18 00:53:15.843177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-18 00:53:15.843185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-18 00:53:15.843199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-18 00:53:15.843205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-18 00:53:15.843227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-18 00:53:15.843237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-18 00:53:15.843260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-18 00:53:15.843287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-18 00:53:15.843293 | orchestrator |
2026-04-18 00:53:15.843299 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-04-18 00:53:15.843306 | orchestrator | Saturday 18 April 2026 00:51:07 +0000 (0:00:03.799) 0:03:08.544 ********
2026-04-18 00:53:15.843327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.843337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-18 00:53:15.843357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-18 00:53:15.843370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.843394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-18 00:53:15.843439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-18 00:53:15.843445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-18 00:53:15.843452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-18 00:53:15.843466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-18 00:53:15.843505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.843531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-18 00:53:15.843541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-18 00:53:15.843564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-18 00:53:15.843585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-18 00:53:15.843612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-18 00:53:15.843619 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.843626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-18 00:53:15.843637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2024.2/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-18 00:53:15.843651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-18 00:53:15.843677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-18 00:53:15.843690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-18 00:53:15.843704 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.843712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-18 00:53:15.843719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups':
True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-18 00:53:15.843738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-18 00:53:15.843746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.843756 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-18 00:53:15.843760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-18 00:53:15.843764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2024.2/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.843771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-18 00:53:15.843783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2024.2/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-18 00:53:15.843792 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.843796 | orchestrator | 2026-04-18 00:53:15.843800 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-18 00:53:15.843804 | orchestrator | Saturday 18 April 2026 00:51:08 +0000 (0:00:01.262) 0:03:09.807 ******** 2026-04-18 
00:53:15.843808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.843813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.843817 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.843820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.843824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.843828 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.843832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.843836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.843840 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.843843 | orchestrator | 2026-04-18 00:53:15.843847 | orchestrator | TASK [proxysql-config : Copying over neutron 
ProxySQL users config] ************ 2026-04-18 00:53:15.843851 | orchestrator | Saturday 18 April 2026 00:51:10 +0000 (0:00:01.311) 0:03:11.119 ******** 2026-04-18 00:53:15.843855 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.843859 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.843863 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.843866 | orchestrator | 2026-04-18 00:53:15.843871 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-18 00:53:15.843875 | orchestrator | Saturday 18 April 2026 00:51:11 +0000 (0:00:01.169) 0:03:12.289 ******** 2026-04-18 00:53:15.843878 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.843882 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.843886 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.843890 | orchestrator | 2026-04-18 00:53:15.843894 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-18 00:53:15.843898 | orchestrator | Saturday 18 April 2026 00:51:12 +0000 (0:00:01.714) 0:03:14.003 ******** 2026-04-18 00:53:15.843901 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.843905 | orchestrator | 2026-04-18 00:53:15.843909 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-18 00:53:15.843913 | orchestrator | Saturday 18 April 2026 00:51:14 +0000 (0:00:01.212) 0:03:15.216 ******** 2026-04-18 00:53:15.843928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-18 00:53:15.843937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-18 00:53:15.843942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-18 00:53:15.843946 | orchestrator | 2026-04-18 00:53:15.843951 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-18 00:53:15.843954 | orchestrator | Saturday 18 April 2026 00:51:17 +0000 (0:00:03.133) 0:03:18.349 ******** 2026-04-18 00:53:15.843959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-18 00:53:15.843966 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.843980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-18 00:53:15.843985 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.843990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2024.2/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-18 00:53:15.843994 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.843998 | orchestrator | 2026-04-18 00:53:15.844002 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-18 00:53:15.844006 | orchestrator | Saturday 18 April 2026 00:51:18 +0000 (0:00:00.835) 0:03:19.185 ******** 2026-04-18 00:53:15.844010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.844015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.844021 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.844024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.844029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.844033 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.844037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.844044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.844048 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.844052 | orchestrator | 2026-04-18 00:53:15.844056 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-18 00:53:15.844060 | orchestrator | Saturday 18 April 2026 00:51:18 +0000 (0:00:00.638) 0:03:19.824 ******** 2026-04-18 00:53:15.844064 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.844068 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.844072 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.844075 | orchestrator | 2026-04-18 00:53:15.844082 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-18 00:53:15.844086 | orchestrator | Saturday 18 April 2026 00:51:19 +0000 (0:00:01.068) 0:03:20.893 ******** 2026-04-18 00:53:15.844090 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.844094 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.844097 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.844101 | orchestrator | 
2026-04-18 00:53:15.844105 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-18 00:53:15.844117 | orchestrator | Saturday 18 April 2026 00:51:21 +0000 (0:00:01.866) 0:03:22.759 ******** 2026-04-18 00:53:15.844121 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.844125 | orchestrator | 2026-04-18 00:53:15.844156 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-18 00:53:15.844162 | orchestrator | Saturday 18 April 2026 00:51:22 +0000 (0:00:01.234) 0:03:23.993 ******** 2026-04-18 00:53:15.844166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.844171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.844179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.844195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.844200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:15.844216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-18 00:53:15.844239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844338 | orchestrator | 2026-04-18 00:53:15.844342 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-18 00:53:15.844346 | orchestrator | Saturday 18 April 2026 00:51:28 +0000 (0:00:05.375) 0:03:29.369 ******** 2026-04-18 00:53:15.844351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.844369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.844374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844386 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.844390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.844397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.844410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844418 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.844422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}})  2026-04-18 00:53:15.844430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.844437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2024.2/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2024.2/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.844453 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.844457 | orchestrator | 2026-04-18 00:53:15.844461 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-18 00:53:15.844465 | orchestrator | Saturday 18 April 2026 00:51:29 +0000 (0:00:00.767) 0:03:30.137 ******** 2026-04-18 00:53:15.844469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-04-18 00:53:15.844490 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.844494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844509 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.844513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.844532 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.844536 | orchestrator | 2026-04-18 00:53:15.844540 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-18 00:53:15.844552 | orchestrator | Saturday 18 April 2026 00:51:30 +0000 (0:00:01.704) 0:03:31.841 ******** 2026-04-18 00:53:15.844556 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.844560 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.844564 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.844567 | orchestrator | 2026-04-18 00:53:15.844571 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-18 00:53:15.844575 | orchestrator | Saturday 18 April 2026 00:51:32 +0000 (0:00:01.390) 0:03:33.231 ******** 2026-04-18 00:53:15.844580 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.844583 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.844587 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.844591 | orchestrator | 2026-04-18 00:53:15.844595 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-18 00:53:15.844603 | orchestrator | Saturday 18 April 2026 00:51:34 +0000 (0:00:02.160) 0:03:35.392 ******** 2026-04-18 00:53:15.844607 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.844611 | 
orchestrator | 2026-04-18 00:53:15.844615 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-18 00:53:15.844618 | orchestrator | Saturday 18 April 2026 00:51:35 +0000 (0:00:01.239) 0:03:36.632 ******** 2026-04-18 00:53:15.844622 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-18 00:53:15.844626 | orchestrator | 2026-04-18 00:53:15.844630 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-18 00:53:15.844634 | orchestrator | Saturday 18 April 2026 00:51:36 +0000 (0:00:01.127) 0:03:37.759 ******** 2026-04-18 00:53:15.844638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-18 00:53:15.844643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-18 00:53:15.844647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-18 00:53:15.844651 | orchestrator | 2026-04-18 00:53:15.844656 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-18 00:53:15.844663 | orchestrator | Saturday 18 April 2026 00:51:40 +0000 (0:00:03.936) 0:03:41.696 ******** 2026-04-18 00:53:15.844669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-18 00:53:15.844675 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.844686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-18 00:53:15.844698 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.844716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-18 00:53:15.844730 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.844736 | orchestrator | 2026-04-18 00:53:15.844742 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-18 00:53:15.844748 | orchestrator | Saturday 18 April 2026 00:51:41 +0000 (0:00:01.246) 0:03:42.943 ******** 2026-04-18 00:53:15.844755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-18 00:53:15.844761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-18 00:53:15.844768 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.844774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-18 00:53:15.844778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}})  2026-04-18 00:53:15.844782 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.844786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-18 00:53:15.844790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-18 00:53:15.844794 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.844798 | orchestrator | 2026-04-18 00:53:15.844801 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-18 00:53:15.844805 | orchestrator | Saturday 18 April 2026 00:51:43 +0000 (0:00:01.577) 0:03:44.520 ******** 2026-04-18 00:53:15.844809 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.844813 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.844817 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.844821 | orchestrator | 2026-04-18 00:53:15.844824 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-18 00:53:15.844828 | orchestrator | Saturday 18 April 2026 00:51:45 +0000 (0:00:02.299) 0:03:46.819 ******** 2026-04-18 00:53:15.844832 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.844835 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.844839 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.844843 | orchestrator | 2026-04-18 00:53:15.844847 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-18 00:53:15.844851 | orchestrator | Saturday 18 April 2026 00:51:48 +0000 (0:00:02.920) 0:03:49.740 ******** 
2026-04-18 00:53:15.844855 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-18 00:53:15.844859 | orchestrator | 2026-04-18 00:53:15.844863 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-18 00:53:15.844871 | orchestrator | Saturday 18 April 2026 00:51:49 +0000 (0:00:00.824) 0:03:50.565 ******** 2026-04-18 00:53:15.844879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-18 00:53:15.844883 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.844904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-18 00:53:15.844908 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.844912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-18 00:53:15.844916 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.844920 | orchestrator | 2026-04-18 00:53:15.844924 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-18 00:53:15.844928 | orchestrator | Saturday 18 April 2026 00:51:50 +0000 (0:00:01.338) 0:03:51.904 ******** 2026-04-18 00:53:15.844932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-18 00:53:15.844936 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.844940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-18 00:53:15.844944 | orchestrator | skipping: [testbed-node-1] 
2026-04-18 00:53:15.844948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-18 00:53:15.844958 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.844962 | orchestrator | 2026-04-18 00:53:15.844965 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-18 00:53:15.844969 | orchestrator | Saturday 18 April 2026 00:51:52 +0000 (0:00:01.271) 0:03:53.176 ******** 2026-04-18 00:53:15.844973 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.844977 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.844980 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.844984 | orchestrator | 2026-04-18 00:53:15.844988 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-18 00:53:15.844992 | orchestrator | Saturday 18 April 2026 00:51:53 +0000 (0:00:01.217) 0:03:54.394 ******** 2026-04-18 00:53:15.844996 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.845000 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.845004 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.845007 | orchestrator | 2026-04-18 00:53:15.845011 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-18 00:53:15.845015 | orchestrator | Saturday 18 April 2026 00:51:55 +0000 (0:00:02.086) 0:03:56.480 ******** 2026-04-18 00:53:15.845019 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.845022 | 
orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.845026 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.845030 | orchestrator | 2026-04-18 00:53:15.845034 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-18 00:53:15.845037 | orchestrator | Saturday 18 April 2026 00:51:58 +0000 (0:00:02.607) 0:03:59.088 ******** 2026-04-18 00:53:15.845044 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-18 00:53:15.845048 | orchestrator | 2026-04-18 00:53:15.845052 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-18 00:53:15.845056 | orchestrator | Saturday 18 April 2026 00:51:59 +0000 (0:00:01.236) 0:04:00.325 ******** 2026-04-18 00:53:15.845069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-18 00:53:15.845073 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.845077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-18 00:53:15.845081 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.845085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-18 00:53:15.845088 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.845092 | orchestrator | 2026-04-18 00:53:15.845096 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-18 00:53:15.845104 | orchestrator | Saturday 18 April 2026 00:52:00 +0000 (0:00:01.177) 0:04:01.503 ******** 2026-04-18 00:53:15.845108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-18 00:53:15.845112 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.845115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-18 00:53:15.845119 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.845123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-18 00:53:15.845127 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.845148 | orchestrator | 2026-04-18 00:53:15.845152 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-18 00:53:15.845156 | orchestrator | Saturday 18 April 2026 00:52:01 +0000 (0:00:01.337) 0:04:02.840 ******** 2026-04-18 00:53:15.845162 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.845166 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.845170 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.845174 | orchestrator | 2026-04-18 00:53:15.845178 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-18 00:53:15.845182 | orchestrator | Saturday 18 April 2026 00:52:03 +0000 (0:00:01.748) 0:04:04.588 ******** 2026-04-18 00:53:15.845186 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.845192 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.845198 | orchestrator | ok: [testbed-node-2] 
2026-04-18 00:53:15.845207 | orchestrator | 2026-04-18 00:53:15.845228 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-18 00:53:15.845235 | orchestrator | Saturday 18 April 2026 00:52:05 +0000 (0:00:02.285) 0:04:06.874 ******** 2026-04-18 00:53:15.845240 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.845246 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.845253 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.845258 | orchestrator | 2026-04-18 00:53:15.845264 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-18 00:53:15.845271 | orchestrator | Saturday 18 April 2026 00:52:08 +0000 (0:00:03.123) 0:04:09.998 ******** 2026-04-18 00:53:15.845276 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.845282 | orchestrator | 2026-04-18 00:53:15.845287 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-18 00:53:15.845294 | orchestrator | Saturday 18 April 2026 00:52:10 +0000 (0:00:01.217) 0:04:11.215 ******** 2026-04-18 00:53:15.845301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-18 00:53:15.845313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-18 00:53:15.845321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.845355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-18 00:53:15.845367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-18 00:53:15.845373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-18 00:53:15.845388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-18 00:53:15.845410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 
5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.845421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.845441 | orchestrator | 2026-04-18 00:53:15.845449 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-18 00:53:15.845456 | orchestrator | Saturday 18 April 2026 00:52:13 +0000 (0:00:03.476) 0:04:14.692 ******** 2026-04-18 00:53:15.845465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-18 00:53:15.845480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-18 00:53:15.845487 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
 2026-04-18 00:53:15.845504 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.845508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-18 00:53:15.845515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-18 00:53:15.845527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.845542 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.845546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-18 00:53:15.845550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-18 00:53:15.845556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-18 00:53:15.845579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-18 00:53:15.845583 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.845587 | orchestrator | 2026-04-18 00:53:15.845591 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-18 00:53:15.845594 | orchestrator | Saturday 18 April 2026 00:52:14 +0000 (0:00:00.627) 0:04:15.319 ******** 2026-04-18 00:53:15.845599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2026-04-18 00:53:15.845604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-18 00:53:15.845608 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.845612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-18 00:53:15.845616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-18 00:53:15.845620 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.845624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-18 00:53:15.845628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-18 00:53:15.845631 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.845635 | orchestrator | 2026-04-18 00:53:15.845639 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-18 00:53:15.845643 | orchestrator | Saturday 18 April 2026 00:52:15 +0000 (0:00:00.791) 0:04:16.111 ******** 2026-04-18 00:53:15.845647 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.845651 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.845658 | orchestrator 
| changed: [testbed-node-2]
2026-04-18 00:53:15.845666 | orchestrator |
2026-04-18 00:53:15.845674 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-18 00:53:15.845679 | orchestrator | Saturday 18 April 2026 00:52:16 +0000 (0:00:01.275) 0:04:17.386 ********
2026-04-18 00:53:15.845694 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:53:15.845700 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:53:15.845708 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:53:15.845713 | orchestrator |
2026-04-18 00:53:15.845717 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-04-18 00:53:15.845720 | orchestrator | Saturday 18 April 2026 00:52:18 +0000 (0:00:02.254) 0:04:19.640 ********
2026-04-18 00:53:15.845724 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:53:15.845728 | orchestrator |
2026-04-18 00:53:15.845732 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-04-18 00:53:15.845738 | orchestrator | Saturday 18 April 2026 00:52:20 +0000 (0:00:01.480) 0:04:21.121 ********
2026-04-18 00:53:15.845753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.845758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.845762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.845767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-18 00:53:15.845786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-18 00:53:15.845791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-18 00:53:15.845796 | orchestrator |
2026-04-18 00:53:15.845799 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-04-18 00:53:15.845803 | orchestrator | Saturday 18 April 2026 00:52:25 +0000 (0:00:05.145) 0:04:26.267 ********
2026-04-18 00:53:15.845807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.845825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-18 00:53:15.845830 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.845835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.845839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-18 00:53:15.845843 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.845847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.845865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-18 00:53:15.845870 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.845874 | orchestrator |
2026-04-18 00:53:15.845878 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-04-18 00:53:15.845882 | orchestrator | Saturday 18 April 2026 00:52:26 +0000 (0:00:00.870) 0:04:27.137 ********
2026-04-18 00:53:15.845886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-18 00:53:15.845890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-18 00:53:15.845895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-18 00:53:15.845900 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.845904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-18 00:53:15.845908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-18 00:53:15.845912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-18 00:53:15.845919 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.845923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})
2026-04-18 00:53:15.845927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-18 00:53:15.845931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})
2026-04-18 00:53:15.845935 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.845939 | orchestrator |
2026-04-18 00:53:15.845943 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-18 00:53:15.845946 | orchestrator | Saturday 18 April 2026 00:52:27 +0000 (0:00:01.281) 0:04:28.419 ********
2026-04-18 00:53:15.845950 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.845954 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.845958 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.845961 | orchestrator |
2026-04-18 00:53:15.845965 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-18 00:53:15.845969 | orchestrator | Saturday 18 April 2026 00:52:27 +0000 (0:00:00.376) 0:04:28.795 ********
2026-04-18 00:53:15.845973 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:53:15.845976 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:53:15.845980 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:53:15.845984 | orchestrator |
2026-04-18 00:53:15.845990 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-18 00:53:15.845994 | orchestrator | Saturday 18 April 2026 00:52:28 +0000 (0:00:01.092) 0:04:29.887 ********
2026-04-18 00:53:15.845998 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:53:15.846002 | orchestrator |
2026-04-18 00:53:15.846006 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-18 00:53:15.846010 | orchestrator | Saturday 18 April 2026 00:52:30 +0000 (0:00:01.419) 0:04:31.307 ********
2026-04-18 00:53:15.846048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-18 00:53:15.846054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-18 00:53:15.846062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-18 00:53:15.846086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-18 00:53:15.846091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-18 00:53:15.846095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-18 00:53:15.846112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-18 00:53:15.846125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-18 00:53:15.846165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-18 00:53:15.846183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.846187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-18 00:53:15.846207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-18 00:53:15.846223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.846228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-18 00:53:15.846232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-18 00:53:15.846258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-18 00:53:15.846263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-18 00:53:15.846267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-18 00:53:15.846286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-18 00:53:15.846293 | orchestrator |
2026-04-18 00:53:15.846299 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-18 00:53:15.846308 | orchestrator | Saturday 18 April 2026 00:52:34 +0000 (0:00:04.192) 0:04:35.499 ********
2026-04-18
00:53:15.846316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-18 00:53:15.846329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 00:53:15.846336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 00:53:15.846369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.846387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-18 00:53:15.846394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 00:53:15.846413 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.846428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-18 00:53:15.846440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 00:53:15.846447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-18 00:53:15.846467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 00:53:15.846477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 00:53:15.846495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.846504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-18 00:53:15.846512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 00:53:15.846540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:15.846554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 00:53:15.846561 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.846567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-18 00:53:15.846574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 00:53:15.846603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 00:53:15.846611 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.846615 | orchestrator | 2026-04-18 00:53:15.846619 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-18 00:53:15.846623 | orchestrator | Saturday 18 April 2026 00:52:35 +0000 (0:00:00.791) 0:04:36.291 ******** 2026-04-18 00:53:15.846627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-18 00:53:15.846634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-18 00:53:15.846641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.846648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 
'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.846654 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.846660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-18 00:53:15.846667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-18 00:53:15.846673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.846688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.846695 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.846706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-18 00:53:15.846712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-18 00:53:15.846719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.846726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-18 00:53:15.846732 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.846738 | orchestrator | 2026-04-18 00:53:15.846744 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-18 00:53:15.846750 | orchestrator | Saturday 18 April 2026 00:52:36 +0000 (0:00:01.254) 0:04:37.546 ******** 2026-04-18 00:53:15.846756 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.846763 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.846769 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.846776 | orchestrator | 
2026-04-18 00:53:15.846782 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-18 00:53:15.846786 | orchestrator | Saturday 18 April 2026 00:52:36 +0000 (0:00:00.432) 0:04:37.979 ******** 2026-04-18 00:53:15.846790 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.846794 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.846797 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.846801 | orchestrator | 2026-04-18 00:53:15.846805 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-18 00:53:15.846809 | orchestrator | Saturday 18 April 2026 00:52:38 +0000 (0:00:01.279) 0:04:39.258 ******** 2026-04-18 00:53:15.846812 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.846816 | orchestrator | 2026-04-18 00:53:15.846820 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-18 00:53:15.846824 | orchestrator | Saturday 18 April 2026 00:52:39 +0000 (0:00:01.338) 0:04:40.597 ******** 2026-04-18 00:53:15.846828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:53:15.846844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:53:15.846849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-18 00:53:15.846854 | orchestrator | 2026-04-18 00:53:15.846857 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-18 00:53:15.846862 | orchestrator | Saturday 18 April 2026 00:52:42 +0000 (0:00:02.711) 0:04:43.309 ******** 2026-04-18 00:53:15.846866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:53:15.846870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:53:15.846879 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.846883 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.846910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-18 00:53:15.846916 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.846920 | orchestrator | 2026-04-18 00:53:15.846923 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] 
********************** 2026-04-18 00:53:15.846927 | orchestrator | Saturday 18 April 2026 00:52:42 +0000 (0:00:00.397) 0:04:43.706 ******** 2026-04-18 00:53:15.846932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-18 00:53:15.846936 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.846940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-18 00:53:15.846944 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.846947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-18 00:53:15.846951 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.846955 | orchestrator | 2026-04-18 00:53:15.846959 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-18 00:53:15.846963 | orchestrator | Saturday 18 April 2026 00:52:43 +0000 (0:00:00.606) 0:04:44.313 ******** 2026-04-18 00:53:15.846966 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.846970 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.846974 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.846978 | orchestrator | 2026-04-18 00:53:15.846982 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-18 00:53:15.846985 | orchestrator | Saturday 18 April 2026 00:52:43 +0000 (0:00:00.413) 0:04:44.727 ******** 2026-04-18 00:53:15.846989 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.846993 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.846997 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.847005 | orchestrator | 
2026-04-18 00:53:15.847009 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-18 00:53:15.847013 | orchestrator | Saturday 18 April 2026 00:52:45 +0000 (0:00:01.356) 0:04:46.083 ******** 2026-04-18 00:53:15.847016 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.847020 | orchestrator | 2026-04-18 00:53:15.847024 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-18 00:53:15.847028 | orchestrator | Saturday 18 April 2026 00:52:46 +0000 (0:00:01.687) 0:04:47.770 ******** 2026-04-18 00:53:15.847032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-18 00:53:15.847039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-18 00:53:15.847052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-18 00:53:15.847059 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-18 00:53:15.847076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-18 00:53:15.847093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-18 00:53:15.847100 | orchestrator | 2026-04-18 00:53:15.847106 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-18 00:53:15.847111 | orchestrator | Saturday 18 April 2026 00:52:52 +0000 (0:00:06.026) 0:04:53.797 ******** 2026-04-18 00:53:15.847117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-18 00:53:15.847149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-18 00:53:15.847157 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.847163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-18 00:53:15.847179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-18 00:53:15.847185 | orchestrator | 
skipping: [testbed-node-1] 2026-04-18 00:53:15.847192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-18 00:53:15.847207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-18 00:53:15.847214 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.847220 | orchestrator | 2026-04-18 00:53:15.847226 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-18 00:53:15.847233 | orchestrator | Saturday 18 April 2026 00:52:53 +0000 (0:00:01.012) 0:04:54.809 ******** 2026-04-18 00:53:15.847239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-18 00:53:15.847248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-18 00:53:15.847254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.847265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.847271 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.847277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-18 00:53:15.847287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-18 00:53:15.847294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.847308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.847315 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.847321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-18 00:53:15.847328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-18 00:53:15.847334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.847340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-18 00:53:15.847347 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.847352 | orchestrator | 2026-04-18 00:53:15.847359 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-18 00:53:15.847365 | orchestrator | Saturday 18 April 2026 00:52:55 +0000 (0:00:01.296) 0:04:56.106 ******** 2026-04-18 00:53:15.847372 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.847378 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.847384 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.847390 | orchestrator | 2026-04-18 00:53:15.847396 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-18 00:53:15.847402 | orchestrator | Saturday 18 April 2026 00:52:56 +0000 (0:00:01.124) 0:04:57.230 ******** 2026-04-18 00:53:15.847408 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:15.847415 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:15.847421 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:15.847427 | orchestrator | 2026-04-18 00:53:15.847434 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-18 00:53:15.847440 | orchestrator | Saturday 18 April 2026 00:52:58 +0000 (0:00:01.996) 0:04:59.227 ******** 2026-04-18 00:53:15.847446 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.847452 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.847458 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.847464 | 
orchestrator | 2026-04-18 00:53:15.847471 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-18 00:53:15.847478 | orchestrator | Saturday 18 April 2026 00:52:58 +0000 (0:00:00.301) 0:04:59.529 ******** 2026-04-18 00:53:15.847484 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.847490 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.847496 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.847502 | orchestrator | 2026-04-18 00:53:15.847508 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-18 00:53:15.847514 | orchestrator | Saturday 18 April 2026 00:52:59 +0000 (0:00:00.580) 0:05:00.110 ******** 2026-04-18 00:53:15.847520 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.847527 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.847533 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.847539 | orchestrator | 2026-04-18 00:53:15.847545 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-18 00:53:15.847557 | orchestrator | Saturday 18 April 2026 00:52:59 +0000 (0:00:00.307) 0:05:00.418 ******** 2026-04-18 00:53:15.847563 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.847569 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.847575 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.847581 | orchestrator | 2026-04-18 00:53:15.847593 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-18 00:53:15.847599 | orchestrator | Saturday 18 April 2026 00:52:59 +0000 (0:00:00.280) 0:05:00.698 ******** 2026-04-18 00:53:15.847605 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.847611 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.847617 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.847623 | 
orchestrator | 2026-04-18 00:53:15.847629 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-04-18 00:53:15.847639 | orchestrator | Saturday 18 April 2026 00:52:59 +0000 (0:00:00.284) 0:05:00.983 ******** 2026-04-18 00:53:15.847645 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:15.847652 | orchestrator | 2026-04-18 00:53:15.847658 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-18 00:53:15.847664 | orchestrator | Saturday 18 April 2026 00:53:01 +0000 (0:00:01.710) 0:05:02.693 ******** 2026-04-18 00:53:15.847671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-18 00:53:15.847678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-18 00:53:15.847685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-18 00:53:15.847691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-18 00:53:15.847703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-18 00:53:15.847717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-18 00:53:15.847724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-18 00:53:15.847731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-18 00:53:15.847737 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-18 00:53:15.847743 | orchestrator | 2026-04-18 00:53:15.847750 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-18 00:53:15.847756 | orchestrator | Saturday 18 April 2026 00:53:04 +0000 (0:00:02.497) 0:05:05.191 ******** 2026-04-18 00:53:15.847762 | orchestrator | changed: [testbed-node-0] => { 2026-04-18 00:53:15.847769 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:53:15.847775 | orchestrator | } 2026-04-18 00:53:15.847781 | orchestrator | changed: [testbed-node-1] => { 2026-04-18 00:53:15.847787 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:53:15.847794 | orchestrator | } 2026-04-18 00:53:15.847799 | orchestrator | changed: [testbed-node-2] => { 2026-04-18 00:53:15.847806 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:53:15.847812 | orchestrator | } 2026-04-18 00:53:15.847818 | orchestrator | 2026-04-18 00:53:15.847825 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-18 00:53:15.847831 | orchestrator | Saturday 18 April 2026 00:53:04 +0000 (0:00:00.340) 0:05:05.532 ******** 2026-04-18 00:53:15.847841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-18 00:53:15.847848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-18 00:53:15.847862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.847869 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:15.847875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-18 00:53:15.847882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-18 00:53:15.847888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.847895 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:15.847906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-18 00:53:15.847913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-18 00:53:15.847922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-18 00:53:15.847929 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:15.847935 | orchestrator | 2026-04-18 00:53:15.847941 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 
2026-04-18 00:53:15.847947 | orchestrator | Saturday 18 April 2026 00:53:06 +0000 (0:00:01.606) 0:05:07.138 ******** 2026-04-18 00:53:15.847957 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.847963 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.847970 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.847976 | orchestrator | 2026-04-18 00:53:15.847982 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-18 00:53:15.847988 | orchestrator | Saturday 18 April 2026 00:53:07 +0000 (0:00:00.986) 0:05:08.124 ******** 2026-04-18 00:53:15.847995 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.848001 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.848007 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.848013 | orchestrator | 2026-04-18 00:53:15.848019 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-18 00:53:15.848025 | orchestrator | Saturday 18 April 2026 00:53:07 +0000 (0:00:00.338) 0:05:08.462 ******** 2026-04-18 00:53:15.848031 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.848037 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.848044 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.848050 | orchestrator | 2026-04-18 00:53:15.848055 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-18 00:53:15.848062 | orchestrator | Saturday 18 April 2026 00:53:08 +0000 (0:00:00.959) 0:05:09.422 ******** 2026-04-18 00:53:15.848068 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.848074 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.848080 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.848087 | orchestrator | 2026-04-18 00:53:15.848093 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-18 00:53:15.848099 | orchestrator | Saturday 18 April 2026 00:53:09 
+0000 (0:00:00.846) 0:05:10.268 ******** 2026-04-18 00:53:15.848105 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:15.848112 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:15.848122 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:15.848165 | orchestrator | 2026-04-18 00:53:15.848172 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-18 00:53:15.848178 | orchestrator | Saturday 18 April 2026 00:53:10 +0000 (0:00:01.242) 0:05:11.511 ******** 2026-04-18 00:53:15.848189 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_g9un60id/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_g9un60id/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_g9un60id/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_g9un60id/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy: Internal Server Error (\"unknown: repository kolla/release/2024.2/haproxy not found\")\\n'"} 2026-04-18 00:53:15.848204 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_z1psep2j/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_z1psep2j/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_z1psep2j/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_z1psep2j/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n 
raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy: Internal Server Error (\"unknown: repository kolla/release/2024.2/haproxy not found\")\\n'"} 2026-04-18 00:53:15.848225 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_4wu9q7eh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_4wu9q7eh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_4wu9q7eh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_4wu9q7eh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.8.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fhaproxy: Internal Server Error (\"unknown: repository kolla/release/2024.2/haproxy not found\")\\n'"} 2026-04-18 00:53:15.848234 | orchestrator | 2026-04-18 00:53:15.848240 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:53:15.848244 | orchestrator | testbed-node-0 : ok=120  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-18 00:53:15.848253 | orchestrator | testbed-node-1 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-18 00:53:15.848257 | orchestrator | testbed-node-2 : ok=119  changed=76  unreachable=0 failed=1  skipped=88  rescued=0 ignored=0 2026-04-18 00:53:15.848260 | orchestrator | 2026-04-18 00:53:15.848264 | orchestrator | 2026-04-18 00:53:15.848268 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:53:15.848272 | orchestrator | Saturday 18 April 2026 00:53:13 +0000 (0:00:02.644) 0:05:14.155 ******** 2026-04-18 00:53:15.848276 | orchestrator | =============================================================================== 2026-04-18 00:53:15.848280 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.03s 2026-04-18 00:53:15.848283 | orchestrator | haproxy-config : Copying over nova haproxy 
config ----------------------- 5.38s 2026-04-18 00:53:15.848287 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.15s 2026-04-18 00:53:15.848291 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.66s 2026-04-18 00:53:15.848295 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.25s 2026-04-18 00:53:15.848298 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.19s 2026-04-18 00:53:15.848302 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.18s 2026-04-18 00:53:15.848306 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.96s 2026-04-18 00:53:15.848310 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.94s 2026-04-18 00:53:15.848313 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.94s 2026-04-18 00:53:15.848317 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.92s 2026-04-18 00:53:15.848321 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.80s 2026-04-18 00:53:15.848325 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.65s 2026-04-18 00:53:15.848328 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.57s 2026-04-18 00:53:15.848332 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.55s 2026-04-18 00:53:15.848336 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.51s 2026-04-18 00:53:15.848339 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.50s 2026-04-18 00:53:15.848343 | orchestrator | haproxy-config : Copying over octavia haproxy config 
-------------------- 3.48s 2026-04-18 00:53:15.848347 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.43s 2026-04-18 00:53:15.848351 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 3.36s 2026-04-18 00:53:18.881228 | orchestrator | 2026-04-18 00:53:18 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:18.883483 | orchestrator | 2026-04-18 00:53:18 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:18.885360 | orchestrator | 2026-04-18 00:53:18 | INFO  | Task 8e32274e-d564-4f31-8cb7-318dec9e5b4c is in state STARTED 2026-04-18 00:53:18.885440 | orchestrator | 2026-04-18 00:53:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:21.918686 | orchestrator | 2026-04-18 00:53:21 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:21.918969 | orchestrator | 2026-04-18 00:53:21 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:21.919546 | orchestrator | 2026-04-18 00:53:21 | INFO  | Task 8e32274e-d564-4f31-8cb7-318dec9e5b4c is in state STARTED 2026-04-18 00:53:21.919560 | orchestrator | 2026-04-18 00:53:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:24.953439 | orchestrator | 2026-04-18 00:53:24 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:24.954722 | orchestrator | 2026-04-18 00:53:24 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:24.955621 | orchestrator | 2026-04-18 00:53:24 | INFO  | Task 8e32274e-d564-4f31-8cb7-318dec9e5b4c is in state STARTED 2026-04-18 00:53:24.955664 | orchestrator | 2026-04-18 00:53:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:27.988409 | orchestrator | 2026-04-18 00:53:27 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:27.988698 | 
orchestrator | 2026-04-18 00:53:27 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:27.989539 | orchestrator | 2026-04-18 00:53:27 | INFO  | Task 8e32274e-d564-4f31-8cb7-318dec9e5b4c is in state STARTED 2026-04-18 00:53:27.989567 | orchestrator | 2026-04-18 00:53:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:31.028477 | orchestrator | 2026-04-18 00:53:31 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:31.028922 | orchestrator | 2026-04-18 00:53:31 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:31.029604 | orchestrator | 2026-04-18 00:53:31 | INFO  | Task 8e32274e-d564-4f31-8cb7-318dec9e5b4c is in state STARTED 2026-04-18 00:53:31.029636 | orchestrator | 2026-04-18 00:53:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:34.061152 | orchestrator | 2026-04-18 00:53:34 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:34.064258 | orchestrator | 2026-04-18 00:53:34 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:34.064635 | orchestrator | 2026-04-18 00:53:34 | INFO  | Task 8e32274e-d564-4f31-8cb7-318dec9e5b4c is in state STARTED 2026-04-18 00:53:34.064720 | orchestrator | 2026-04-18 00:53:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:37.099245 | orchestrator | 2026-04-18 00:53:37 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:37.099550 | orchestrator | 2026-04-18 00:53:37 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:37.101521 | orchestrator | 2026-04-18 00:53:37 | INFO  | Task 8e32274e-d564-4f31-8cb7-318dec9e5b4c is in state STARTED 2026-04-18 00:53:37.101588 | orchestrator | 2026-04-18 00:53:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:40.129985 | orchestrator | 2026-04-18 00:53:40 | INFO  | Task 
e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:40.130579 | orchestrator | 2026-04-18 00:53:40 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:40.133135 | orchestrator | 2026-04-18 00:53:40.133300 | orchestrator | 2026-04-18 00:53:40.133318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-18 00:53:40.133326 | orchestrator | 2026-04-18 00:53:40.133332 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-18 00:53:40.133338 | orchestrator | Saturday 18 April 2026 00:53:16 +0000 (0:00:00.297) 0:00:00.297 ******** 2026-04-18 00:53:40.133344 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:53:40.133351 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:53:40.133358 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:53:40.133384 | orchestrator | 2026-04-18 00:53:40.133392 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-18 00:53:40.133399 | orchestrator | Saturday 18 April 2026 00:53:17 +0000 (0:00:00.272) 0:00:00.569 ******** 2026-04-18 00:53:40.133431 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-18 00:53:40.133440 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-18 00:53:40.133447 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-18 00:53:40.133454 | orchestrator | 2026-04-18 00:53:40.133462 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-18 00:53:40.133499 | orchestrator | 2026-04-18 00:53:40.133509 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-18 00:53:40.133517 | orchestrator | Saturday 18 April 2026 00:53:17 +0000 (0:00:00.280) 0:00:00.849 ******** 2026-04-18 00:53:40.133523 | orchestrator | included: 
/ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:40.133530 | orchestrator | 2026-04-18 00:53:40.133536 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-18 00:53:40.133542 | orchestrator | Saturday 18 April 2026 00:53:17 +0000 (0:00:00.574) 0:00:01.424 ******** 2026-04-18 00:53:40.133548 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-18 00:53:40.133567 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-18 00:53:40.133573 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-18 00:53:40.133578 | orchestrator | 2026-04-18 00:53:40.133584 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-18 00:53:40.133590 | orchestrator | Saturday 18 April 2026 00:53:19 +0000 (0:00:01.026) 0:00:02.450 ******** 2026-04-18 00:53:40.133599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.133609 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.133632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.133650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.133659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.133666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.133673 | orchestrator | 2026-04-18 00:53:40.133678 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-18 00:53:40.133689 | orchestrator | Saturday 18 April 2026 00:53:20 +0000 (0:00:01.356) 0:00:03.807 ******** 2026-04-18 00:53:40.133700 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:53:40.133706 | 
orchestrator | 2026-04-18 00:53:40.133713 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-18 00:53:40.133720 | orchestrator | Saturday 18 April 2026 00:53:20 +0000 (0:00:00.437) 0:00:04.244 ******** 2026-04-18 00:53:40.133726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.133738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.133745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.133752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.133768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.133779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.133786 | orchestrator | 2026-04-18 00:53:40.133793 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-18 00:53:40.133799 | orchestrator | Saturday 18 April 2026 00:53:23 +0000 (0:00:02.431) 0:00:06.675 ******** 2026-04-18 00:53:40.133806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  
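The loop output above repeats the same kolla service definition for each node; only the healthcheck target IP changes (192.168.16.10/.11/.12). A minimal Python sketch of how such a per-node healthcheck entry could be derived — the `build_healthcheck` helper is hypothetical, for illustration only, and is not kolla-ansible code:

```python
# Minimal sketch (assumption: not actual kolla-ansible internals): rebuild the
# per-node healthcheck dict that appears in the loop output above.

def build_healthcheck(api_ip: str, port: int) -> dict:
    """Render a kolla-style container healthcheck for one node."""
    return {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        # healthcheck_curl is the probe wrapper invoked inside the container,
        # as seen in the 'test' field of the logged service dicts
        "test": ["CMD-SHELL", f"healthcheck_curl http://{api_ip}:{port}"],
        "timeout": "30",
    }

# One entry per testbed node, matching the internal IPs seen in the log
checks = {
    f"testbed-node-{i}": build_healthcheck(f"192.168.16.1{i}", 9200)
    for i in range(3)
}
print(checks["testbed-node-1"]["test"][1])
# → healthcheck_curl http://192.168.16.11:9200
```

The dashboards service follows the same pattern with port 5601, which is why each node's item dict differs only in the probed address.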
2026-04-18 00:53:40.133822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:40.133834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-18 00:53:40.133842 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:40.133850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:40.133857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-18 00:53:40.133869 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:40.133883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-18 00:53:40.133891 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:40.133897 | orchestrator | 2026-04-18 00:53:40.133903 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-18 00:53:40.133910 | 
orchestrator | Saturday 18 April 2026 00:53:23 +0000 (0:00:00.654) 0:00:07.329 ******** 2026-04-18 00:53:40.133921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:40.133929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:40.133937 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-18 00:53:40.133957 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:40.133970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-18 00:53:40.133977 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:40.133988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:40.133996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-18 00:53:40.134009 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:40.134065 | orchestrator | 2026-04-18 00:53:40.134072 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-18 00:53:40.134079 | orchestrator | Saturday 18 April 2026 00:53:24 +0000 (0:00:00.861) 0:00:08.191 ******** 2026-04-18 00:53:40.134093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.134100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.134113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.134120 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.134421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.134494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.134506 | orchestrator | 2026-04-18 00:53:40.134515 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-18 00:53:40.134533 | orchestrator | Saturday 18 April 2026 00:53:27 +0000 (0:00:02.353) 0:00:10.545 ******** 2026-04-18 00:53:40.134541 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:40.134549 | 
orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:40.134563 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:40.134571 | orchestrator | 2026-04-18 00:53:40.134578 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-18 00:53:40.134586 | orchestrator | Saturday 18 April 2026 00:53:29 +0000 (0:00:02.386) 0:00:12.932 ******** 2026-04-18 00:53:40.134593 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:53:40.134600 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:53:40.134607 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:53:40.134614 | orchestrator | 2026-04-18 00:53:40.134621 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-18 00:53:40.134628 | orchestrator | Saturday 18 April 2026 00:53:30 +0000 (0:00:01.432) 0:00:14.364 ******** 2026-04-18 00:53:40.134653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.134662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.134689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 00:53:40.134713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.134722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.134743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-18 00:53:40.134752 | orchestrator | 2026-04-18 00:53:40.134760 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-18 00:53:40.134766 | orchestrator | Saturday 18 April 2026 00:53:33 +0000 (0:00:02.378) 0:00:16.742 ******** 2026-04-18 00:53:40.134773 | orchestrator | changed: [testbed-node-0] => { 2026-04-18 00:53:40.134779 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:53:40.134786 | orchestrator | } 2026-04-18 00:53:40.134792 | orchestrator | changed: [testbed-node-1] => { 2026-04-18 00:53:40.134798 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:53:40.134805 | orchestrator | } 2026-04-18 00:53:40.134811 | orchestrator | changed: [testbed-node-2] => { 2026-04-18 00:53:40.134818 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:53:40.134825 | orchestrator | } 2026-04-18 00:53:40.134832 | orchestrator | 2026-04-18 00:53:40.134839 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-18 00:53:40.134845 | orchestrator | Saturday 18 April 2026 00:53:33 +0000 (0:00:00.402) 0:00:17.144 ******** 2026-04-18 00:53:40.134855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:40.134869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-18 00:53:40.134876 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:40.134883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 00:53:40.134896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-18 00:53:40.134903 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:40.134913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-04-18 00:53:40.134924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2024.2/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-18 00:53:40.134931 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:40.134937 | orchestrator | 2026-04-18 00:53:40.134943 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-18 00:53:40.134948 | orchestrator | Saturday 18 April 2026 00:53:34 +0000 (0:00:00.675) 0:00:17.820 ******** 2026-04-18 00:53:40.134954 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:40.134960 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:53:40.134966 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:53:40.134973 | orchestrator | 2026-04-18 00:53:40.134980 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-18 00:53:40.134987 
| orchestrator | Saturday 18 April 2026 00:53:34 +0000 (0:00:00.267) 0:00:18.088 ******** 2026-04-18 00:53:40.134994 | orchestrator | 2026-04-18 00:53:40.135002 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-18 00:53:40.135010 | orchestrator | Saturday 18 April 2026 00:53:34 +0000 (0:00:00.062) 0:00:18.151 ******** 2026-04-18 00:53:40.135017 | orchestrator | 2026-04-18 00:53:40.135025 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-18 00:53:40.135032 | orchestrator | Saturday 18 April 2026 00:53:34 +0000 (0:00:00.060) 0:00:18.212 ******** 2026-04-18 00:53:40.135040 | orchestrator | 2026-04-18 00:53:40.135056 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-18 00:53:40.135069 | orchestrator | Saturday 18 April 2026 00:53:34 +0000 (0:00:00.061) 0:00:18.273 ******** 2026-04-18 00:53:40.135095 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:40.135104 | orchestrator | 2026-04-18 00:53:40.135112 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-18 00:53:40.135120 | orchestrator | Saturday 18 April 2026 00:53:35 +0000 (0:00:00.402) 0:00:18.675 ******** 2026-04-18 00:53:40.135127 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:53:40.135135 | orchestrator | 2026-04-18 00:53:40.135143 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-18 00:53:40.135150 | orchestrator | Saturday 18 April 2026 00:53:35 +0000 (0:00:00.304) 0:00:18.980 ******** 2026-04-18 00:53:40.135183 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_sqbxzbuh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_sqbxzbuh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_sqbxzbuh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_sqbxzbuh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch: Internal Server Error (\"unknown: repository kolla/release/2024.2/opensearch not found\")\\n'"} 2026-04-18 00:53:40.135211 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_7c38eojh/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_7c38eojh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_7c38eojh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_7c38eojh/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for 
line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch: Internal Server Error (\"unknown: repository kolla/release/2024.2/opensearch not found\")\\n'"} 2026-04-18 00:53:40.135229 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_8jhx8al6/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_8jhx8al6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n 
self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_8jhx8al6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_8jhx8al6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=2.19.5.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fopensearch: Internal Server Error (\"unknown: repository kolla/release/2024.2/opensearch not found\")\\n'"} 2026-04-18 00:53:40.135238 | orchestrator | 2026-04-18 00:53:40.135245 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:53:40.135259 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-04-18 00:53:40.135269 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-18 00:53:40.135277 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-18 00:53:40.135290 | orchestrator | 2026-04-18 00:53:40.135297 | orchestrator | 2026-04-18 00:53:40.135305 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-18 00:53:40.135312 | orchestrator | Saturday 18 April 2026 00:53:39 +0000 (0:00:03.730) 0:00:22.710 ******** 2026-04-18 00:53:40.135319 | orchestrator | =============================================================================== 2026-04-18 00:53:40.135327 | orchestrator | opensearch : Restart opensearch container ------------------------------- 3.73s 2026-04-18 00:53:40.135335 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.43s 2026-04-18 00:53:40.135343 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.39s 2026-04-18 00:53:40.135350 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.38s 2026-04-18 00:53:40.135357 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.35s 2026-04-18 00:53:40.135365 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.43s 2026-04-18 00:53:40.135372 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.36s 2026-04-18 00:53:40.135380 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.03s 2026-04-18 00:53:40.135387 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.86s 2026-04-18 00:53:40.135398 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.68s 2026-04-18 00:53:40.135406 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.65s 2026-04-18 00:53:40.135414 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-04-18 00:53:40.135421 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2026-04-18 00:53:40.135428 | orchestrator | opensearch : Disable 
shard allocation ----------------------------------- 0.40s 2026-04-18 00:53:40.135435 | orchestrator | service-check-containers : opensearch | Notify handlers to restart containers --- 0.40s 2026-04-18 00:53:40.135443 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.30s 2026-04-18 00:53:40.135450 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.28s 2026-04-18 00:53:40.135457 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2026-04-18 00:53:40.135464 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.27s 2026-04-18 00:53:40.135471 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.18s 2026-04-18 00:53:40.135479 | orchestrator | 2026-04-18 00:53:40 | INFO  | Task 8e32274e-d564-4f31-8cb7-318dec9e5b4c is in state SUCCESS 2026-04-18 00:53:40.135486 | orchestrator | 2026-04-18 00:53:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:43.151346 | orchestrator | 2026-04-18 00:53:43 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:43.151631 | orchestrator | 2026-04-18 00:53:43 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:43.151654 | orchestrator | 2026-04-18 00:53:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:46.185467 | orchestrator | 2026-04-18 00:53:46 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:46.188122 | orchestrator | 2026-04-18 00:53:46 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:46.188257 | orchestrator | 2026-04-18 00:53:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:49.231829 | orchestrator | 2026-04-18 00:53:49 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:49.232070 | 
orchestrator | 2026-04-18 00:53:49 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:49.232154 | orchestrator | 2026-04-18 00:53:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:52.262237 | orchestrator | 2026-04-18 00:53:52 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:52.264588 | orchestrator | 2026-04-18 00:53:52 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:52.264652 | orchestrator | 2026-04-18 00:53:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:55.315067 | orchestrator | 2026-04-18 00:53:55 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:55.315412 | orchestrator | 2026-04-18 00:53:55 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:55.315476 | orchestrator | 2026-04-18 00:53:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:53:58.365468 | orchestrator | 2026-04-18 00:53:58 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:53:58.366841 | orchestrator | 2026-04-18 00:53:58 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:53:58.366954 | orchestrator | 2026-04-18 00:53:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:54:01.407535 | orchestrator | 2026-04-18 00:54:01 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:54:01.408975 | orchestrator | 2026-04-18 00:54:01 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:54:01.409044 | orchestrator | 2026-04-18 00:54:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:54:04.452034 | orchestrator | 2026-04-18 00:54:04 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED 2026-04-18 00:54:04.453574 | orchestrator | 2026-04-18 00:54:04 | INFO  | Task 
dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:04.453704 | orchestrator | 2026-04-18 00:54:04 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:07.496893 | orchestrator | 2026-04-18 00:54:07 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED
2026-04-18 00:54:07.498747 | orchestrator | 2026-04-18 00:54:07 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:07.498805 | orchestrator | 2026-04-18 00:54:07 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:10.542108 | orchestrator | 2026-04-18 00:54:10 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED
2026-04-18 00:54:10.542723 | orchestrator | 2026-04-18 00:54:10 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:10.542784 | orchestrator | 2026-04-18 00:54:10 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:13.581995 | orchestrator | 2026-04-18 00:54:13 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED
2026-04-18 00:54:13.583572 | orchestrator | 2026-04-18 00:54:13 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:13.583668 | orchestrator | 2026-04-18 00:54:13 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:16.625106 | orchestrator | 2026-04-18 00:54:16 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED
2026-04-18 00:54:16.625301 | orchestrator | 2026-04-18 00:54:16 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:16.625318 | orchestrator | 2026-04-18 00:54:16 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:19.667841 | orchestrator | 2026-04-18 00:54:19 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED
2026-04-18 00:54:19.669394 | orchestrator | 2026-04-18 00:54:19 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:19.669798 | orchestrator | 2026-04-18 00:54:19 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:22.705342 | orchestrator | 2026-04-18 00:54:22 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state STARTED
2026-04-18 00:54:22.705929 | orchestrator | 2026-04-18 00:54:22 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:22.706113 | orchestrator | 2026-04-18 00:54:22 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:25.745854 | orchestrator | 2026-04-18 00:54:25 | INFO  | Task e6716653-f1db-4b08-b50e-4bd0511a9eb8 is in state SUCCESS
2026-04-18 00:54:25.746693 | orchestrator |
2026-04-18 00:54:25.746735 | orchestrator |
2026-04-18 00:54:25.746741 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-04-18 00:54:25.746747 | orchestrator |
2026-04-18 00:54:25.746751 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-18 00:54:25.746756 | orchestrator | Saturday 18 April 2026 00:53:16 +0000 (0:00:00.102) 0:00:00.102 ********
2026-04-18 00:54:25.746760 | orchestrator | ok: [localhost] => {
2026-04-18 00:54:25.746767 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-04-18 00:54:25.746771 | orchestrator | }
2026-04-18 00:54:25.746776 | orchestrator |
2026-04-18 00:54:25.746783 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-04-18 00:54:25.746789 | orchestrator | Saturday 18 April 2026 00:53:16 +0000 (0:00:00.057) 0:00:00.159 ********
2026-04-18 00:54:25.746800 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-04-18 00:54:25.746808 | orchestrator | ...ignoring
2026-04-18 00:54:25.746815 | orchestrator |
2026-04-18 00:54:25.746821 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-04-18 00:54:25.746827 | orchestrator | Saturday 18 April 2026 00:53:19 +0000 (0:00:02.934) 0:00:03.094 ********
2026-04-18 00:54:25.746834 | orchestrator | skipping: [localhost]
2026-04-18 00:54:25.746840 | orchestrator |
2026-04-18 00:54:25.746847 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-04-18 00:54:25.746854 | orchestrator | Saturday 18 April 2026 00:53:19 +0000 (0:00:00.060) 0:00:03.154 ********
2026-04-18 00:54:25.746862 | orchestrator | ok: [localhost]
2026-04-18 00:54:25.746870 | orchestrator |
2026-04-18 00:54:25.746879 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-18 00:54:25.746886 | orchestrator |
2026-04-18 00:54:25.746891 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-18 00:54:25.746897 | orchestrator | Saturday 18 April 2026 00:53:19 +0000 (0:00:00.205) 0:00:03.359 ********
2026-04-18 00:54:25.746904 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:54:25.746910 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:54:25.746916 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:54:25.746922 | orchestrator |
2026-04-18 00:54:25.746927 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:54:25.746933 | orchestrator | Saturday 18 April 2026 00:53:20 +0000 (0:00:00.287) 0:00:03.647 ********
2026-04-18 00:54:25.746939 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-18 00:54:25.746947 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-18 00:54:25.746953 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-18 00:54:25.746959 | orchestrator |
2026-04-18 00:54:25.746965 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-18 00:54:25.746972 | orchestrator |
2026-04-18 00:54:25.746979 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-18 00:54:25.747044 | orchestrator | Saturday 18 April 2026 00:53:20 +0000 (0:00:00.390) 0:00:04.037 ********
2026-04-18 00:54:25.747087 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-18 00:54:25.747094 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-18 00:54:25.747097 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-18 00:54:25.747174 | orchestrator |
2026-04-18 00:54:25.747264 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-18 00:54:25.747275 | orchestrator | Saturday 18 April 2026 00:53:20 +0000 (0:00:00.335) 0:00:04.373 ********
2026-04-18 00:54:25.747310 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:54:25.747320 | orchestrator |
2026-04-18 00:54:25.747327 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-04-18 00:54:25.747333 | orchestrator | Saturday 18 April 2026 00:53:21 +0000 (0:00:00.572) 0:00:04.946 ********
2026-04-18 00:54:25.747371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-18 00:54:25.747390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-18 00:54:25.747413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-18 00:54:25.747421 | orchestrator | 2026-04-18 00:54:25.747429 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-18 00:54:25.747436 | orchestrator | Saturday 18 April 2026 00:53:24 +0000 (0:00:02.563) 0:00:07.509 ******** 2026-04-18 00:54:25.747443 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.747452 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:25.747460 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:54:25.747466 | orchestrator | 2026-04-18 00:54:25.747473 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-18 00:54:25.747480 | orchestrator | Saturday 18 April 2026 00:53:24 +0000 (0:00:00.550) 0:00:08.060 ******** 2026-04-18 00:54:25.747487 | 
orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.747494 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:25.747500 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:54:25.747507 | orchestrator | 2026-04-18 00:54:25.747514 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-18 00:54:25.747521 | orchestrator | Saturday 18 April 2026 00:53:25 +0000 (0:00:01.323) 0:00:09.384 ******** 2026-04-18 00:54:25.747567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-18 00:54:25.747589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-18 00:54:25.747598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-18 00:54:25.747612 | orchestrator | 2026-04-18 00:54:25.747619 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-18 00:54:25.747631 | orchestrator | Saturday 18 April 2026 00:53:28 +0000 (0:00:03.020) 0:00:12.405 ******** 2026-04-18 00:54:25.747639 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.747646 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:25.747653 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:54:25.747659 | orchestrator | 2026-04-18 00:54:25.747666 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-18 00:54:25.747673 | orchestrator | Saturday 18 April 2026 00:53:30 +0000 (0:00:01.032) 0:00:13.437 ******** 2026-04-18 00:54:25.747680 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:54:25.747686 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:54:25.747693 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:54:25.747700 | orchestrator | 2026-04-18 00:54:25.747707 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-18 00:54:25.747713 | orchestrator | Saturday 18 April 2026 00:53:33 +0000 (0:00:03.660) 0:00:17.098 ******** 2026-04-18 00:54:25.747720 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:54:25.747727 | orchestrator | 2026-04-18 00:54:25.747734 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-18 00:54:25.747741 | orchestrator | Saturday 18 April 2026 00:53:34 +0000 (0:00:00.453) 0:00:17.552 ******** 2026-04-18 00:54:25.747756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.747771 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.747783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.747790 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:25.747803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.747816 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:25.747823 | orchestrator | 2026-04-18 00:54:25.747830 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-18 00:54:25.747837 | orchestrator | Saturday 18 April 2026 
00:53:36 +0000 (0:00:02.306) 0:00:19.858 ******** 2026-04-18 00:54:25.747849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.747864 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.747877 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.747884 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:25.747892 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.747899 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:25.747906 | orchestrator | 2026-04-18 00:54:25.747913 | orchestrator | TASK 
[service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-18 00:54:25.747920 | orchestrator | Saturday 18 April 2026 00:53:38 +0000 (0:00:02.280) 0:00:22.138 ******** 2026-04-18 00:54:25.747984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.748005 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:25.748013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.748021 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.748034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 
00:54:25.748041 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:25.748054 | orchestrator | 2026-04-18 00:54:25.748377 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-18 00:54:25.748395 | orchestrator | Saturday 18 April 2026 00:53:40 +0000 (0:00:02.177) 0:00:24.316 ******** 2026-04-18 00:54:25.748404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-18 00:54:25.748421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-18 00:54:25.748438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-18 
00:54:25.748456 | orchestrator |
2026-04-18 00:54:25.748463 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] ***
2026-04-18 00:54:25.748469 | orchestrator | Saturday 18 April 2026 00:53:43 +0000 (0:00:02.174) 0:00:26.490 ********
2026-04-18 00:54:25.748475 | orchestrator | changed: [testbed-node-0] => {
2026-04-18 00:54:25.748481 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:54:25.748488 | orchestrator | }
2026-04-18 00:54:25.748494 | orchestrator | changed: [testbed-node-1] => {
2026-04-18 00:54:25.748500 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:54:25.748506 | orchestrator | }
2026-04-18 00:54:25.748512 | orchestrator | changed: [testbed-node-2] => {
2026-04-18 00:54:25.748519 | orchestrator |  "msg": "Notifying handlers"
2026-04-18 00:54:25.748525 | orchestrator | }
2026-04-18 00:54:25.748532 | orchestrator |
2026-04-18 00:54:25.748538 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-18 00:54:25.748545 | orchestrator | Saturday 18 April 2026 00:53:43 +0000 (0:00:00.269) 0:00:26.760 ********
2026-04-18 00:54:25.748557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True,
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.748570 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:25.748582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.748590 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.748601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-18 00:54:25.748615 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.748621 | orchestrator |
2026-04-18 00:54:25.748628 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-04-18 00:54:25.748634 | orchestrator | Saturday 18 April 2026 00:53:45 +0000 (0:00:01.971) 0:00:28.731 ********
2026-04-18 00:54:25.748641 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.748647 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.748653 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.748659 | orchestrator |
2026-04-18 00:54:25.748666 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-04-18 00:54:25.748675 | orchestrator | Saturday 18 April 2026 00:53:45 +0000 (0:00:00.088) 0:00:29.100 ********
2026-04-18 00:54:25.748682 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.748688 | orchestrator |
2026-04-18 00:54:25.748694 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-04-18 00:54:25.748701 | orchestrator | Saturday 18 April 2026 00:53:45 +0000 (0:00:00.246) 0:00:29.188 ********
2026-04-18 00:54:25.748707 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.748713 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.748720 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.748726 | orchestrator |
2026-04-18 00:54:25.748732 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-04-18 00:54:25.748738 | orchestrator | Saturday 18 April 2026 00:53:45 +0000 (0:00:00.258) 0:00:29.434 ********
2026-04-18 00:54:25.748744 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.748750 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.748756 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.748762 | orchestrator |
2026-04-18 00:54:25.748768 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-04-18 00:54:25.748779 | orchestrator | Saturday 18 April 2026 00:53:46 +0000 (0:00:00.266) 0:00:29.693 ********
2026-04-18 00:54:25.748786 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.748792 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.748799 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.748805 | orchestrator |
2026-04-18 00:54:25.748811 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-04-18 00:54:25.748817 | orchestrator | Saturday 18 April 2026 00:53:46 +0000 (0:00:00.354) 0:00:29.960 ********
2026-04-18 00:54:25.748824 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.748830 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.748836 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.748842 | orchestrator |
2026-04-18 00:54:25.748848 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-04-18 00:54:25.748855 | orchestrator | Saturday 18 April 2026 00:53:46 +0000 (0:00:00.248) 0:00:30.315 ********
2026-04-18 00:54:25.748861 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.748867 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.748874 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.748880 | orchestrator |
2026-04-18 00:54:25.748887 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-04-18 00:54:25.748893 | orchestrator | Saturday 18 April 2026 00:53:47 +0000 (0:00:00.297) 0:00:30.564 ********
2026-04-18 00:54:25.748899 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.748905 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.748911 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.748918 | orchestrator |
2026-04-18 00:54:25.748924 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-04-18 00:54:25.748930 | orchestrator | Saturday 18 April 2026 00:53:47 +0000 (0:00:00.328) 0:00:30.861 ********
2026-04-18 00:54:25.748942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-18 00:54:25.748949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-18 00:54:25.748955 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-18 00:54:25.748962 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.748968 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-18 00:54:25.748974 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-18 00:54:25.748985 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-18 00:54:25.748991 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.748997 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-18 00:54:25.749004 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-18 00:54:25.749010 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-18 00:54:25.749017 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749023 | orchestrator |
2026-04-18 00:54:25.749029 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-04-18 00:54:25.749036 | orchestrator | Saturday 18 April 2026 00:53:47 +0000 (0:00:00.328) 0:00:31.190 ********
2026-04-18 00:54:25.749042 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749049 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749055 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749061 | orchestrator |
2026-04-18 00:54:25.749068 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-04-18 00:54:25.749074 | orchestrator | Saturday 18 April 2026 00:53:48 +0000 (0:00:00.444) 0:00:31.634 ********
2026-04-18 00:54:25.749081 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749087 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749093 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749099 | orchestrator |
2026-04-18 00:54:25.749106 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-04-18 00:54:25.749113 | orchestrator | Saturday 18 April 2026 00:53:48 +0000 (0:00:00.279) 0:00:31.936 ********
2026-04-18 00:54:25.749119 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749126 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749133 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749139 | orchestrator |
2026-04-18 00:54:25.749146 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-04-18 00:54:25.749152 | orchestrator | Saturday 18 April 2026 00:53:48 +0000 (0:00:00.279) 0:00:32.216 ********
2026-04-18 00:54:25.749158 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749164 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749171 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749176 | orchestrator |
2026-04-18 00:54:25.749182 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-04-18 00:54:25.749208 | orchestrator | Saturday 18 April 2026 00:53:49 +0000 (0:00:00.315) 0:00:32.531 ********
2026-04-18 00:54:25.749214 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749220 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749226 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749232 | orchestrator |
2026-04-18 00:54:25.749237 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-04-18 00:54:25.749248 | orchestrator | Saturday 18 April 2026 00:53:49 +0000 (0:00:00.449) 0:00:32.981 ********
2026-04-18 00:54:25.749255 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749261 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749267 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749273 | orchestrator |
2026-04-18 00:54:25.749280 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-04-18 00:54:25.749286 | orchestrator | Saturday 18 April 2026 00:53:49 +0000 (0:00:00.292) 0:00:33.274 ********
2026-04-18 00:54:25.749304 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749310 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749317 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749323 | orchestrator |
2026-04-18 00:54:25.749329 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************
2026-04-18 00:54:25.749335 | orchestrator | Saturday 18 April 2026 00:53:50
+0000 (0:00:00.306) 0:00:33.580 ******** 2026-04-18 00:54:25.749341 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:25.749347 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.749354 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:25.749358 | orchestrator | 2026-04-18 00:54:25.749362 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-18 00:54:25.749366 | orchestrator | Saturday 18 April 2026 00:53:50 +0000 (0:00:00.294) 0:00:33.876 ******** 2026-04-18 00:54:25.749374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.749378 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:25.749387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.749398 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.749409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-18 00:54:25.749417 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749426 | orchestrator |
2026-04-18 00:54:25.749434 | orchestrator | TASK [mariadb : Wait for slave MariaDB] ****************************************
2026-04-18 00:54:25.749439 | orchestrator | Saturday 18 April 2026 00:53:52 +0000 (0:00:02.079) 0:00:35.955 ********
2026-04-18 00:54:25.749444 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749451 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749456 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749462 | orchestrator |
2026-04-18 00:54:25.749468 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] ***************************
2026-04-18 00:54:25.749473 | orchestrator | Saturday 18 April 2026 00:53:52 +0000 (0:00:00.463) 0:00:36.418 ********
2026-04-18 00:54:25.749485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy':
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.749502 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:25.749514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.749521 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.749529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-18 00:54:25.749538 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:25.749542 | orchestrator | 2026-04-18 00:54:25.749546 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-18 00:54:25.749550 | orchestrator | Saturday 18 April 2026 00:53:54 +0000 (0:00:01.845) 0:00:38.263 ******** 2026-04-18 00:54:25.749553 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:25.749557 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.749561 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:25.749564 | orchestrator | 2026-04-18 00:54:25.749568 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-18 00:54:25.749572 | orchestrator | Saturday 18 April 2026 00:53:55 +0000 (0:00:00.307) 0:00:38.570 ******** 2026-04-18 00:54:25.749575 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:25.749579 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:25.749583 | orchestrator | 
skipping: [testbed-node-2]
2026-04-18 00:54:25.749587 | orchestrator |
2026-04-18 00:54:25.749590 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-04-18 00:54:25.749594 | orchestrator | Saturday 18 April 2026 00:53:55 +0000 (0:00:00.279) 0:00:38.850 ********
2026-04-18 00:54:25.749598 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749601 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749605 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749609 | orchestrator |
2026-04-18 00:54:25.749612 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-04-18 00:54:25.749616 | orchestrator | Saturday 18 April 2026 00:53:55 +0000 (0:00:00.459) 0:00:39.309 ********
2026-04-18 00:54:25.749620 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749623 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749627 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749631 | orchestrator |
2026-04-18 00:54:25.749635 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-18 00:54:25.749638 | orchestrator | Saturday 18 April 2026 00:53:56 +0000 (0:00:00.491) 0:00:39.801 ********
2026-04-18 00:54:25.749642 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749646 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749649 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749653 | orchestrator |
2026-04-18 00:54:25.749657 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-04-18 00:54:25.749661 | orchestrator | Saturday 18 April 2026 00:53:56 +0000 (0:00:00.312) 0:00:40.113 ********
2026-04-18 00:54:25.749665 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:54:25.749668 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:54:25.749672 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:54:25.749676 | orchestrator |
2026-04-18 00:54:25.749679 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-18 00:54:25.749683 | orchestrator | Saturday 18 April 2026 00:53:57 +0000 (0:00:00.913) 0:00:41.027 ********
2026-04-18 00:54:25.749687 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:54:25.749691 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:54:25.749695 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:54:25.749698 | orchestrator |
2026-04-18 00:54:25.749710 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-18 00:54:25.749714 | orchestrator | Saturday 18 April 2026 00:53:57 +0000 (0:00:00.299) 0:00:41.327 ********
2026-04-18 00:54:25.749718 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:54:25.749721 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:54:25.749725 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:54:25.749729 | orchestrator |
2026-04-18 00:54:25.749732 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-18 00:54:25.749736 | orchestrator | Saturday 18 April 2026 00:53:58 +0000 (0:00:00.299) 0:00:41.627 ********
2026-04-18 00:54:25.749741 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-04-18 00:54:25.749747 | orchestrator | ...ignoring
2026-04-18 00:54:25.749751 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-04-18 00:54:25.749755 | orchestrator | ...ignoring
2026-04-18 00:54:25.749759 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-04-18 00:54:25.749763 | orchestrator | ...ignoring
2026-04-18 00:54:25.749766 | orchestrator |
2026-04-18 00:54:25.749770 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-18 00:54:25.749774 | orchestrator | Saturday 18 April 2026 00:54:08 +0000 (0:00:10.780) 0:00:52.408 ********
2026-04-18 00:54:25.749778 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:54:25.749781 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:54:25.749785 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:54:25.749789 | orchestrator |
2026-04-18 00:54:25.749793 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-18 00:54:25.749796 | orchestrator | Saturday 18 April 2026 00:54:09 +0000 (0:00:00.402) 0:00:52.810 ********
2026-04-18 00:54:25.749800 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749804 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749807 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749811 | orchestrator |
2026-04-18 00:54:25.749815 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-18 00:54:25.749818 | orchestrator | Saturday 18 April 2026 00:54:09 +0000 (0:00:00.260) 0:00:53.071 ********
2026-04-18 00:54:25.749822 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749826 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749829 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749833 | orchestrator |
2026-04-18 00:54:25.749839 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-18 00:54:25.749843 | orchestrator | Saturday 18 April 2026 00:54:09 +0000 (0:00:00.280) 0:00:53.352 ********
2026-04-18 00:54:25.749847 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749850 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749854 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749858 | orchestrator |
2026-04-18 00:54:25.749862 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-18 00:54:25.749865 | orchestrator | Saturday 18 April 2026 00:54:10 +0000 (0:00:00.294) 0:00:53.646 ********
2026-04-18 00:54:25.749869 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:54:25.749873 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:54:25.749876 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:54:25.749880 | orchestrator |
2026-04-18 00:54:25.749884 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-18 00:54:25.749887 | orchestrator | Saturday 18 April 2026 00:54:10 +0000 (0:00:00.438) 0:00:53.908 ********
2026-04-18 00:54:25.749898 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:54:25.749902 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749906 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749910 | orchestrator |
2026-04-18 00:54:25.749918 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-18 00:54:25.749922 | orchestrator | Saturday 18 April 2026 00:54:10 +0000 (0:00:00.320) 0:00:54.347 ********
2026-04-18 00:54:25.749926 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749929 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749933 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-18 00:54:25.749937 | orchestrator |
2026-04-18 00:54:25.749940 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-18 00:54:25.749944 | orchestrator | Saturday 18 April 2026 00:54:11 +0000 (0:00:00.320) 0:00:54.668 ********
2026-04-18
00:54:25.749953 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_xx4_88px/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_xx4_88px/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_xx4_88px/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/mariadb-server not found\")\\n'"}
2026-04-18 00:54:25.749957 | orchestrator |
2026-04-18 00:54:25.749961 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-18 00:54:25.749965 | orchestrator | Saturday 18 April 2026 00:54:14 +0000 (0:00:03.591) 0:00:58.260 ********
2026-04-18 00:54:25.749969 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749972 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749976 | orchestrator |
2026-04-18 00:54:25.749980 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-18 00:54:25.749983 | orchestrator | Saturday 18 April 2026 00:54:15 +0000 (0:00:00.489) 0:00:58.749 ********
2026-04-18 00:54:25.749987 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:54:25.749991 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:54:25.749994 | orchestrator |
2026-04-18 00:54:25.750000 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-18 00:54:25.750004 | orchestrator | Saturday 18 April 2026 00:54:15 +0000 (0:00:00.187) 0:00:58.937 ********
2026-04-18 00:54:25.750008 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:54:25.750080 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:54:25.750085 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-18 00:54:25.750089 | orchestrator |
2026-04-18 00:54:25.750093 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-18 00:54:25.750097 | orchestrator | skipping: no hosts matched
2026-04-18 00:54:25.750101 | orchestrator |
2026-04-18 00:54:25.750104 | orchestrator | PLAY [Start mariadb
services] ************************************************** 2026-04-18 00:54:25.750108 | orchestrator | 2026-04-18 00:54:25.750112 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-18 00:54:25.750115 | orchestrator | Saturday 18 April 2026 00:54:15 +0000 (0:00:00.207) 0:00:59.144 ******** 2026-04-18 00:54:25.750125 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_k4cvv7ri/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_k4cvv7ri/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_k4cvv7ri/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_k4cvv7ri/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n 
^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=10.11.16.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fmariadb-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/mariadb-server not found\")\\n'"}
2026-04-18 00:54:25.750129 | orchestrator |
2026-04-18 00:54:25.750133 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:54:25.750137 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-18 00:54:25.750141 | orchestrator | testbed-node-0 : ok=20  changed=9  unreachable=0 failed=1  skipped=33  rescued=0 ignored=1
2026-04-18 00:54:25.750147 | orchestrator | testbed-node-1 : ok=16  changed=7  unreachable=0 failed=1  skipped=38  rescued=0 ignored=1
2026-04-18 00:54:25.750156 | orchestrator | testbed-node-2 : ok=16  changed=7  unreachable=0 failed=0 skipped=38  rescued=0 ignored=1
2026-04-18 00:54:25.750162 | orchestrator |
2026-04-18 00:54:25.750166 | orchestrator |
2026-04-18 00:54:25.750170 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:54:25.750174 | orchestrator | Saturday 18 April 2026 00:54:24 +0000 (0:00:08.843) 0:01:07.988 ********
2026-04-18 00:54:25.750182 | orchestrator | ===============================================================================
2026-04-18 00:54:25.750230 | orchestrator |
mariadb : Check MariaDB service port liveness -------------------------- 10.78s
2026-04-18 00:54:25.750238 | orchestrator | mariadb : Restart MariaDB container ------------------------------------- 8.84s
2026-04-18 00:54:25.750245 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.66s
2026-04-18 00:54:25.750251 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 3.59s
2026-04-18 00:54:25.750257 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.02s
2026-04-18 00:54:25.750262 | orchestrator | Check MariaDB service --------------------------------------------------- 2.93s
2026-04-18 00:54:25.750268 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.56s
2026-04-18 00:54:25.750272 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.31s
2026-04-18 00:54:25.750276 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.28s
2026-04-18 00:54:25.750280 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.18s
2026-04-18 00:54:25.750283 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.17s
2026-04-18 00:54:25.750287 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.08s
2026-04-18 00:54:25.750291 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.97s
2026-04-18 00:54:25.750294 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 1.85s
2026-04-18 00:54:25.750298 | orchestrator | mariadb : Copying over my.cnf for mariabackup --------------------------- 1.32s
2026-04-18 00:54:25.750302 | orchestrator | mariadb : Copying over config.json files for mariabackup ---------------- 1.03s
2026-04-18 00:54:25.750306 | orchestrator | mariadb : Create MariaDB volume ----------------------------------------- 0.91s
2026-04-18 00:54:25.750309 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.57s
2026-04-18 00:54:25.750313 | orchestrator | mariadb : Ensuring database backup config directory exists -------------- 0.55s
2026-04-18 00:54:25.750316 | orchestrator | service-check : mariadb | Fail if containers are unhealthy -------------- 0.49s
2026-04-18 00:54:25.750320 | orchestrator | 2026-04-18 00:54:25 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:25.750324 | orchestrator | 2026-04-18 00:54:25 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:28.787286 | orchestrator | 2026-04-18 00:54:28 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:28.787681 | orchestrator | 2026-04-18 00:54:28 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state STARTED
2026-04-18 00:54:28.788380 | orchestrator | 2026-04-18 00:54:28 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:28.788484 | orchestrator | 2026-04-18 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:31.826550 | orchestrator | 2026-04-18 00:54:31 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:31.826612 | orchestrator | 2026-04-18 00:54:31 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state STARTED
2026-04-18 00:54:31.826622 | orchestrator | 2026-04-18 00:54:31 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:31.826649 | orchestrator | 2026-04-18 00:54:31 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:34.876267 | orchestrator | 2026-04-18 00:54:34 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:34.877627 | orchestrator | 2026-04-18 00:54:34 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state
STARTED
2026-04-18 00:54:34.884857 | orchestrator | 2026-04-18 00:54:34 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:34.884903 | orchestrator | 2026-04-18 00:54:34 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:37.930389 | orchestrator | 2026-04-18 00:54:37 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:37.931094 | orchestrator | 2026-04-18 00:54:37 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state STARTED
2026-04-18 00:54:37.931871 | orchestrator | 2026-04-18 00:54:37 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:37.931902 | orchestrator | 2026-04-18 00:54:37 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:40.977515 | orchestrator | 2026-04-18 00:54:40 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:40.977558 | orchestrator | 2026-04-18 00:54:40 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state STARTED
2026-04-18 00:54:40.978721 | orchestrator | 2026-04-18 00:54:40 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:40.978755 | orchestrator | 2026-04-18 00:54:40 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:44.052665 | orchestrator | 2026-04-18 00:54:44 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:44.053107 | orchestrator | 2026-04-18 00:54:44 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state STARTED
2026-04-18 00:54:44.055960 | orchestrator | 2026-04-18 00:54:44 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:44.056010 | orchestrator | 2026-04-18 00:54:44 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:47.095739 | orchestrator | 2026-04-18 00:54:47 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:47.098851 | orchestrator | 2026-04-18 00:54:47 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state STARTED
2026-04-18 00:54:47.098920 | orchestrator | 2026-04-18 00:54:47 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:47.098928 | orchestrator | 2026-04-18 00:54:47 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:50.135951 | orchestrator | 2026-04-18 00:54:50 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:50.136035 | orchestrator | 2026-04-18 00:54:50 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state STARTED
2026-04-18 00:54:50.136885 | orchestrator | 2026-04-18 00:54:50 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:50.136918 | orchestrator | 2026-04-18 00:54:50 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:53.168814 | orchestrator | 2026-04-18 00:54:53 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:53.169663 | orchestrator | 2026-04-18 00:54:53 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state STARTED
2026-04-18 00:54:53.171316 | orchestrator | 2026-04-18 00:54:53 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:53.171364 | orchestrator | 2026-04-18 00:54:53 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:56.201167 | orchestrator | 2026-04-18 00:54:56 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:56.203505 | orchestrator | 2026-04-18 00:54:56 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state STARTED
2026-04-18 00:54:56.204688 | orchestrator | 2026-04-18 00:54:56 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:56.205035 | orchestrator | 2026-04-18 00:54:56 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:54:59.234804 | orchestrator | 2026-04-18 00:54:59 | INFO  | Task
dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:54:59.235861 | orchestrator | 2026-04-18 00:54:59 | INFO  | Task ce5b273e-43ce-4a5d-821f-ad5da081b5b7 is in state SUCCESS
2026-04-18 00:54:59.237821 | orchestrator |
2026-04-18 00:54:59.237869 | orchestrator |
2026-04-18 00:54:59.237877 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-18 00:54:59.237884 | orchestrator |
2026-04-18 00:54:59.237890 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-18 00:54:59.237896 | orchestrator | Saturday 18 April 2026 00:54:27 +0000 (0:00:00.270) 0:00:00.270 ********
2026-04-18 00:54:59.237901 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:54:59.237908 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:54:59.237913 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:54:59.237918 | orchestrator |
2026-04-18 00:54:59.237924 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:54:59.237929 | orchestrator | Saturday 18 April 2026 00:54:27 +0000 (0:00:00.270) 0:00:00.540 ********
2026-04-18 00:54:59.237934 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-18 00:54:59.237940 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-18 00:54:59.237945 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-18 00:54:59.237950 | orchestrator |
2026-04-18 00:54:59.237955 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-18 00:54:59.237960 | orchestrator |
2026-04-18 00:54:59.237965 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-18 00:54:59.237971 | orchestrator | Saturday 18 April 2026 00:54:28 +0000 (0:00:00.256) 0:00:00.796 ********
2026-04-18 00:54:59.237976 | orchestrator | included:
/ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:54:59.237983 | orchestrator | 2026-04-18 00:54:59.237988 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-18 00:54:59.237993 | orchestrator | Saturday 18 April 2026 00:54:28 +0000 (0:00:00.424) 0:00:01.221 ******** 2026-04-18 00:54:59.238390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:54:59.238457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:54:59.238469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:54:59.238480 | orchestrator | 2026-04-18 00:54:59.238485 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-18 00:54:59.238491 | orchestrator | Saturday 18 April 2026 00:54:30 +0000 (0:00:01.619) 0:00:02.841 ******** 2026-04-18 00:54:59.238496 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.238501 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.238506 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:54:59.238511 | orchestrator | 2026-04-18 00:54:59.238522 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-18 00:54:59.238527 | orchestrator | Saturday 18 April 2026 00:54:30 +0000 (0:00:00.233) 0:00:03.074 ******** 2026-04-18 00:54:59.238532 | 
orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-18 00:54:59.238538 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-18 00:54:59.238543 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-18 00:54:59.238548 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-18 00:54:59.238553 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-18 00:54:59.238558 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-18 00:54:59.238563 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-18 00:54:59.238568 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-18 00:54:59.238573 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-18 00:54:59.238578 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-18 00:54:59.238583 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-18 00:54:59.238588 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-18 00:54:59.238593 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-18 00:54:59.238598 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-18 00:54:59.238603 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-18 00:54:59.238608 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-18 00:54:59.238613 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': 
False})  2026-04-18 00:54:59.238625 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-18 00:54:59.238630 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-18 00:54:59.238635 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-18 00:54:59.238640 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-18 00:54:59.238645 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-18 00:54:59.238650 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-18 00:54:59.238655 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-18 00:54:59.238661 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-18 00:54:59.238667 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-18 00:54:59.238672 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-18 00:54:59.238678 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-18 00:54:59.238683 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-18 00:54:59.238688 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'magnum', 'enabled': True}) 2026-04-18 00:54:59.238693 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-18 00:54:59.238698 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-18 00:54:59.238703 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-18 00:54:59.238711 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-18 00:54:59.238716 | orchestrator | 2026-04-18 00:54:59.238721 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.238727 | orchestrator | Saturday 18 April 2026 00:54:31 +0000 (0:00:00.626) 0:00:03.701 ******** 2026-04-18 00:54:59.238732 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.238737 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.238745 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:54:59.238751 | orchestrator | 2026-04-18 00:54:59.238756 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.238761 | orchestrator | Saturday 18 April 2026 00:54:31 +0000 (0:00:00.340) 0:00:04.042 ******** 2026-04-18 00:54:59.238766 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.238771 | orchestrator | 2026-04-18 00:54:59.238776 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.238781 | orchestrator | Saturday 18 April 2026 00:54:31 +0000 (0:00:00.114) 0:00:04.157 ******** 2026-04-18 00:54:59.238786 | orchestrator | 
skipping: [testbed-node-0] 2026-04-18 00:54:59.238791 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.238796 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.238801 | orchestrator | 2026-04-18 00:54:59.238806 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.238811 | orchestrator | Saturday 18 April 2026 00:54:31 +0000 (0:00:00.284) 0:00:04.442 ******** 2026-04-18 00:54:59.238820 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.238825 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.238833 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:54:59.238842 | orchestrator | 2026-04-18 00:54:59.238854 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.238866 | orchestrator | Saturday 18 April 2026 00:54:32 +0000 (0:00:00.280) 0:00:04.723 ******** 2026-04-18 00:54:59.238874 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.238883 | orchestrator | 2026-04-18 00:54:59.238891 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.238900 | orchestrator | Saturday 18 April 2026 00:54:32 +0000 (0:00:00.112) 0:00:04.835 ******** 2026-04-18 00:54:59.238909 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.238918 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.238925 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.238934 | orchestrator | 2026-04-18 00:54:59.238943 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.238952 | orchestrator | Saturday 18 April 2026 00:54:32 +0000 (0:00:00.328) 0:00:05.163 ******** 2026-04-18 00:54:59.238961 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.238970 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.238979 | orchestrator | ok: [testbed-node-2] 2026-04-18 
00:54:59.238984 | orchestrator | 2026-04-18 00:54:59.238989 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.238994 | orchestrator | Saturday 18 April 2026 00:54:32 +0000 (0:00:00.280) 0:00:05.443 ******** 2026-04-18 00:54:59.238999 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239005 | orchestrator | 2026-04-18 00:54:59.239010 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.239016 | orchestrator | Saturday 18 April 2026 00:54:32 +0000 (0:00:00.151) 0:00:05.595 ******** 2026-04-18 00:54:59.239022 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239027 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239033 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239039 | orchestrator | 2026-04-18 00:54:59.239044 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.239050 | orchestrator | Saturday 18 April 2026 00:54:33 +0000 (0:00:00.236) 0:00:05.831 ******** 2026-04-18 00:54:59.239055 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.239061 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.239067 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:54:59.239072 | orchestrator | 2026-04-18 00:54:59.239078 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.239083 | orchestrator | Saturday 18 April 2026 00:54:33 +0000 (0:00:00.277) 0:00:06.109 ******** 2026-04-18 00:54:59.239089 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239094 | orchestrator | 2026-04-18 00:54:59.239100 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.239106 | orchestrator | Saturday 18 April 2026 00:54:33 +0000 (0:00:00.106) 0:00:06.216 ******** 2026-04-18 00:54:59.239112 | 
orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239117 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239123 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239129 | orchestrator | 2026-04-18 00:54:59.239134 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.239140 | orchestrator | Saturday 18 April 2026 00:54:33 +0000 (0:00:00.417) 0:00:06.633 ******** 2026-04-18 00:54:59.239145 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.239151 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.239157 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:54:59.239162 | orchestrator | 2026-04-18 00:54:59.239168 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.239174 | orchestrator | Saturday 18 April 2026 00:54:34 +0000 (0:00:00.311) 0:00:06.945 ******** 2026-04-18 00:54:59.239185 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239191 | orchestrator | 2026-04-18 00:54:59.239196 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.239202 | orchestrator | Saturday 18 April 2026 00:54:34 +0000 (0:00:00.104) 0:00:07.050 ******** 2026-04-18 00:54:59.239268 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239274 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239280 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239285 | orchestrator | 2026-04-18 00:54:59.239291 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.239297 | orchestrator | Saturday 18 April 2026 00:54:34 +0000 (0:00:00.264) 0:00:07.315 ******** 2026-04-18 00:54:59.239303 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.239308 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.239314 | orchestrator | ok: 
[testbed-node-2] 2026-04-18 00:54:59.239319 | orchestrator | 2026-04-18 00:54:59.239329 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.239335 | orchestrator | Saturday 18 April 2026 00:54:34 +0000 (0:00:00.271) 0:00:07.586 ******** 2026-04-18 00:54:59.239340 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239346 | orchestrator | 2026-04-18 00:54:59.239352 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.239358 | orchestrator | Saturday 18 April 2026 00:54:35 +0000 (0:00:00.112) 0:00:07.699 ******** 2026-04-18 00:54:59.239369 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239376 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239381 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239387 | orchestrator | 2026-04-18 00:54:59.239393 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.239399 | orchestrator | Saturday 18 April 2026 00:54:35 +0000 (0:00:00.433) 0:00:08.132 ******** 2026-04-18 00:54:59.239405 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.239411 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.239416 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:54:59.239422 | orchestrator | 2026-04-18 00:54:59.239428 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.239433 | orchestrator | Saturday 18 April 2026 00:54:35 +0000 (0:00:00.297) 0:00:08.430 ******** 2026-04-18 00:54:59.239439 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239444 | orchestrator | 2026-04-18 00:54:59.239450 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.239456 | orchestrator | Saturday 18 April 2026 00:54:35 +0000 (0:00:00.133) 0:00:08.563 ******** 
2026-04-18 00:54:59.239462 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239467 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239473 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239479 | orchestrator | 2026-04-18 00:54:59.239485 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.239490 | orchestrator | Saturday 18 April 2026 00:54:36 +0000 (0:00:00.358) 0:00:08.922 ******** 2026-04-18 00:54:59.239496 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.239501 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.239507 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:54:59.239513 | orchestrator | 2026-04-18 00:54:59.239519 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.239525 | orchestrator | Saturday 18 April 2026 00:54:36 +0000 (0:00:00.496) 0:00:09.418 ******** 2026-04-18 00:54:59.239531 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239536 | orchestrator | 2026-04-18 00:54:59.239542 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.239548 | orchestrator | Saturday 18 April 2026 00:54:37 +0000 (0:00:00.291) 0:00:09.710 ******** 2026-04-18 00:54:59.239553 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239559 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239565 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239576 | orchestrator | 2026-04-18 00:54:59.239581 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.239587 | orchestrator | Saturday 18 April 2026 00:54:37 +0000 (0:00:00.297) 0:00:10.009 ******** 2026-04-18 00:54:59.239593 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.239598 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.239604 | 
orchestrator | ok: [testbed-node-2] 2026-04-18 00:54:59.239610 | orchestrator | 2026-04-18 00:54:59.239615 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.239621 | orchestrator | Saturday 18 April 2026 00:54:37 +0000 (0:00:00.364) 0:00:10.374 ******** 2026-04-18 00:54:59.239627 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239632 | orchestrator | 2026-04-18 00:54:59.239638 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.239644 | orchestrator | Saturday 18 April 2026 00:54:37 +0000 (0:00:00.156) 0:00:10.530 ******** 2026-04-18 00:54:59.239650 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239655 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239661 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239666 | orchestrator | 2026-04-18 00:54:59.239672 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-18 00:54:59.239677 | orchestrator | Saturday 18 April 2026 00:54:38 +0000 (0:00:00.277) 0:00:10.807 ******** 2026-04-18 00:54:59.239684 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:54:59.239689 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:54:59.239695 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:54:59.239700 | orchestrator | 2026-04-18 00:54:59.239706 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-18 00:54:59.239712 | orchestrator | Saturday 18 April 2026 00:54:38 +0000 (0:00:00.436) 0:00:11.244 ******** 2026-04-18 00:54:59.239717 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239723 | orchestrator | 2026-04-18 00:54:59.239728 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-18 00:54:59.239735 | orchestrator | Saturday 18 April 2026 00:54:38 +0000 (0:00:00.124) 
0:00:11.368 ******** 2026-04-18 00:54:59.239740 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239746 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239751 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239757 | orchestrator | 2026-04-18 00:54:59.239763 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-18 00:54:59.239768 | orchestrator | Saturday 18 April 2026 00:54:38 +0000 (0:00:00.260) 0:00:11.628 ******** 2026-04-18 00:54:59.239774 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:54:59.239780 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:54:59.239786 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:54:59.239791 | orchestrator | 2026-04-18 00:54:59.239806 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-18 00:54:59.239812 | orchestrator | Saturday 18 April 2026 00:54:40 +0000 (0:00:01.762) 0:00:13.391 ******** 2026-04-18 00:54:59.239825 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-18 00:54:59.239831 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-18 00:54:59.239840 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-18 00:54:59.239845 | orchestrator | 2026-04-18 00:54:59.239851 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-18 00:54:59.239857 | orchestrator | Saturday 18 April 2026 00:54:43 +0000 (0:00:02.539) 0:00:15.931 ******** 2026-04-18 00:54:59.239863 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-18 00:54:59.239873 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-18 00:54:59.239879 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-18 00:54:59.239889 | orchestrator | 2026-04-18 00:54:59.239895 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-18 00:54:59.239901 | orchestrator | Saturday 18 April 2026 00:54:45 +0000 (0:00:02.508) 0:00:18.439 ******** 2026-04-18 00:54:59.239907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-18 00:54:59.239913 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-18 00:54:59.239919 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-18 00:54:59.239925 | orchestrator | 2026-04-18 00:54:59.239931 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-18 00:54:59.239936 | orchestrator | Saturday 18 April 2026 00:54:47 +0000 (0:00:01.762) 0:00:20.202 ******** 2026-04-18 00:54:59.239942 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239947 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239953 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239958 | orchestrator | 2026-04-18 00:54:59.239964 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-18 00:54:59.239969 | orchestrator | Saturday 18 April 2026 00:54:47 +0000 (0:00:00.275) 0:00:20.478 ******** 2026-04-18 00:54:59.239975 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.239980 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.239986 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.239992 | orchestrator | 2026-04-18 00:54:59.239997 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-18 
00:54:59.240003 | orchestrator | Saturday 18 April 2026 00:54:48 +0000 (0:00:00.265) 0:00:20.744 ******** 2026-04-18 00:54:59.240008 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:54:59.240017 | orchestrator | 2026-04-18 00:54:59.240026 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-18 00:54:59.240035 | orchestrator | Saturday 18 April 2026 00:54:48 +0000 (0:00:00.688) 0:00:21.432 ******** 2026-04-18 00:54:59.240056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:54:59.240091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:54:59.240114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:54:59.240129 | orchestrator | 2026-04-18 00:54:59.240136 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-18 00:54:59.240142 | orchestrator | Saturday 18 April 2026 00:54:50 +0000 (0:00:01.738) 0:00:23.171 ******** 2026-04-18 00:54:59.240149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 
'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:54:59.240155 | orchestrator | skipping: 
[testbed-node-2] 2026-04-18 00:54:59.240170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:54:59.240181 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.240187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:54:59.240194 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.240200 | orchestrator | 2026-04-18 00:54:59.240232 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-18 00:54:59.240242 | orchestrator | Saturday 18 April 2026 00:54:51 +0000 (0:00:00.832) 0:00:24.004 ******** 2026-04-18 00:54:59.240257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:54:59.240264 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.240270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:54:59.240281 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.240295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 
00:54:59.240302 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.240308 | orchestrator | 2026-04-18 00:54:59.240314 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-18 00:54:59.240320 | orchestrator | Saturday 18 April 2026 00:54:52 +0000 (0:00:01.138) 0:00:25.142 ******** 2026-04-18 00:54:59.240334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:54:59.240346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:54:59.240362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-18 00:54:59.240374 | orchestrator | 2026-04-18 00:54:59.240380 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-18 00:54:59.240386 | orchestrator | Saturday 18 April 2026 00:54:53 +0000 (0:00:01.487) 0:00:26.630 ******** 2026-04-18 00:54:59.240392 | orchestrator | changed: [testbed-node-0] => { 2026-04-18 00:54:59.240397 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:54:59.240403 | orchestrator | } 2026-04-18 00:54:59.240409 | orchestrator | changed: [testbed-node-1] => { 2026-04-18 00:54:59.240415 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:54:59.240421 | orchestrator | } 2026-04-18 00:54:59.240427 | orchestrator | changed: [testbed-node-2] => { 2026-04-18 00:54:59.240432 | orchestrator |  "msg": "Notifying 
handlers" 2026-04-18 00:54:59.240438 | orchestrator | } 2026-04-18 00:54:59.240444 | orchestrator | 2026-04-18 00:54:59.240450 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-18 00:54:59.240456 | orchestrator | Saturday 18 April 2026 00:54:54 +0000 (0:00:00.361) 0:00:26.992 ******** 2026-04-18 00:54:59.240462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:54:59.240473 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.240489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:54:59.240496 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.240508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-18 00:54:59.240519 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.240525 | orchestrator | 2026-04-18 00:54:59.240531 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-18 00:54:59.240537 | orchestrator | Saturday 18 April 2026 00:54:55 +0000 (0:00:01.192) 0:00:28.185 ******** 2026-04-18 00:54:59.240546 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:54:59.240552 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:54:59.240557 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:54:59.240563 | orchestrator | 2026-04-18 00:54:59.240569 | orchestrator | TASK 
[horizon : include_tasks] *************************************************
2026-04-18 00:54:59.240575 | orchestrator | Saturday 18 April 2026 00:54:55 +0000 (0:00:00.390) 0:00:28.575 ********
2026-04-18 00:54:59.240581 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:54:59.240587 | orchestrator |
2026-04-18 00:54:59.240593 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-18 00:54:59.240598 | orchestrator | Saturday 18 April 2026 00:54:56 +0000 (0:00:00.507) 0:00:29.083 ********
2026-04-18 00:54:59.240604 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:54:59.240610 | orchestrator |
2026-04-18 00:54:59.240616 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:54:59.240622 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=1  skipped=26  rescued=0 ignored=0
2026-04-18 00:54:59.240628 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-18 00:54:59.240635 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-18 00:54:59.240641 | orchestrator |
2026-04-18 00:54:59.240646 | orchestrator |
2026-04-18 00:54:59.240652 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:54:59.240659 | orchestrator | Saturday 18 April 2026 00:54:57 +0000 (0:00:00.798) 0:00:29.882 ********
2026-04-18 00:54:59.240669 | orchestrator | ===============================================================================
2026-04-18 00:54:59.240675 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.54s
2026-04-18 00:54:59.240681 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.51s
2026-04-18 00:54:59.240687 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.76s
2026-04-18 00:54:59.240692 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.76s
2026-04-18 00:54:59.240698 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.74s
2026-04-18 00:54:59.240704 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.62s
2026-04-18 00:54:59.240710 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.49s
2026-04-18 00:54:59.240715 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.19s
2026-04-18 00:54:59.240721 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.14s
2026-04-18 00:54:59.240727 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.83s
2026-04-18 00:54:59.240732 | orchestrator | horizon : Creating Horizon database ------------------------------------- 0.80s
2026-04-18 00:54:59.240738 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s
2026-04-18 00:54:59.240744 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s
2026-04-18 00:54:59.240749 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s
2026-04-18 00:54:59.240755 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2026-04-18 00:54:59.240761 | orchestrator | horizon : Update policy file name --------------------------------------- 0.44s
2026-04-18 00:54:59.240766 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.43s
2026-04-18 00:54:59.240772 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.42s
2026-04-18 00:54:59.240778 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.42s
2026-04-18 00:54:59.240783 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.39s
2026-04-18 00:54:59.240789 | orchestrator | 2026-04-18 00:54:59 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:54:59.240795 | orchestrator | 2026-04-18 00:54:59 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:55:02.275006 | orchestrator | 2026-04-18 00:55:02 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:55:02.276437 | orchestrator | 2026-04-18 00:55:02 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:55:02.276508 | orchestrator | 2026-04-18 00:55:02 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:55:05.317297 | orchestrator | 2026-04-18 00:55:05 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:55:05.319929 | orchestrator | 2026-04-18 00:55:05 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:55:05.319993 | orchestrator | 2026-04-18 00:55:05 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:55:08.366366 | orchestrator | 2026-04-18 00:55:08 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:55:08.368788 | orchestrator | 2026-04-18 00:55:08 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state STARTED
2026-04-18 00:55:08.369134 | orchestrator | 2026-04-18 00:55:08 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:55:11.416098 | orchestrator | 2026-04-18 00:55:11 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:55:11.420141 | orchestrator |
2026-04-18 00:55:11.420189 | orchestrator |
2026-04-18 00:55:11.420196 | orchestrator | PLAY [Group hosts based on
configuration] ************************************** 2026-04-18 00:55:11.420290 | orchestrator | 2026-04-18 00:55:11.420302 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-18 00:55:11.420310 | orchestrator | Saturday 18 April 2026 00:54:27 +0000 (0:00:00.263) 0:00:00.263 ******** 2026-04-18 00:55:11.420316 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:55:11.420324 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:55:11.420331 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:55:11.420337 | orchestrator | 2026-04-18 00:55:11.420344 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-18 00:55:11.420350 | orchestrator | Saturday 18 April 2026 00:54:27 +0000 (0:00:00.237) 0:00:00.501 ******** 2026-04-18 00:55:11.420377 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-18 00:55:11.420381 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-18 00:55:11.420412 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-18 00:55:11.420417 | orchestrator | 2026-04-18 00:55:11.420421 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-18 00:55:11.420425 | orchestrator | 2026-04-18 00:55:11.420429 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-18 00:55:11.420433 | orchestrator | Saturday 18 April 2026 00:54:27 +0000 (0:00:00.214) 0:00:00.716 ******** 2026-04-18 00:55:11.420607 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:55:11.420613 | orchestrator | 2026-04-18 00:55:11.420617 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-18 00:55:11.420621 | orchestrator | Saturday 18 April 2026 00:54:28 +0000 (0:00:00.442) 0:00:01.158 ******** 
2026-04-18 00:55:11.420629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.420656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.420678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.420700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.420707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.420711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.420716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.420723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.420734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.420739 | orchestrator | 2026-04-18 00:55:11.420743 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-18 00:55:11.420747 | orchestrator | Saturday 18 April 2026 00:54:30 +0000 (0:00:02.529) 0:00:03.687 ******** 2026-04-18 00:55:11.420751 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.420756 | orchestrator | 2026-04-18 00:55:11.420760 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-18 00:55:11.420764 | orchestrator | Saturday 18 April 2026 00:54:31 
+0000 (0:00:00.116) 0:00:03.804 ******** 2026-04-18 00:55:11.420768 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.420772 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.420776 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:55:11.420835 | orchestrator | 2026-04-18 00:55:11.420842 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-18 00:55:11.420852 | orchestrator | Saturday 18 April 2026 00:54:31 +0000 (0:00:00.230) 0:00:04.034 ******** 2026-04-18 00:55:11.420860 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-18 00:55:11.420866 | orchestrator | 2026-04-18 00:55:11.420872 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-18 00:55:11.420879 | orchestrator | Saturday 18 April 2026 00:54:32 +0000 (0:00:00.819) 0:00:04.854 ******** 2026-04-18 00:55:11.420885 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:55:11.420891 | orchestrator | 2026-04-18 00:55:11.420897 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-18 00:55:11.420903 | orchestrator | Saturday 18 April 2026 00:54:32 +0000 (0:00:00.564) 0:00:05.419 ******** 2026-04-18 00:55:11.420910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.420917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.420955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.420964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.420971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.420979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.420986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421017 | orchestrator | 2026-04-18 00:55:11.421024 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-18 00:55:11.421030 | orchestrator | Saturday 18 April 2026 00:54:35 +0000 (0:00:02.941) 0:00:08.360 ******** 2026-04-18 00:55:11.421038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.421045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.421064 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.421074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.421083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.421091 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.421095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.421102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421106 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.421110 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:55:11.421114 | orchestrator | 2026-04-18 00:55:11.421120 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-18 00:55:11.421124 | orchestrator | Saturday 18 April 2026 00:54:36 +0000 (0:00:00.570) 0:00:08.930 ******** 2026-04-18 00:55:11.421132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.421136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.421147 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.421152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.421161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.421172 | 
orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.421176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.421180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.421191 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:55:11.421195 | orchestrator | 2026-04-18 00:55:11.421199 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-18 00:55:11.421204 | orchestrator | Saturday 18 April 2026 00:54:37 +0000 (0:00:01.068) 0:00:09.999 ******** 2026-04-18 00:55:11.421210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 
2026-04-18 00:55:11.421230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.421236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.421243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421277 | orchestrator | 2026-04-18 00:55:11.421281 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-18 00:55:11.421285 | orchestrator | Saturday 18 April 2026 00:54:40 +0000 (0:00:03.222) 0:00:13.221 ******** 2026-04-18 00:55:11.421289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.421295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.421308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.421315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-04-18 00:55:11.421324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2026-04-18 00:55:11.421358 | orchestrator | 2026-04-18 00:55:11.421362 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-18 00:55:11.421369 | orchestrator | Saturday 18 April 2026 00:54:46 +0000 (0:00:05.635) 0:00:18.856 ******** 2026-04-18 00:55:11.421373 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:55:11.421377 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:55:11.421381 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:55:11.421385 | orchestrator | 2026-04-18 00:55:11.421389 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-18 00:55:11.421393 | orchestrator | Saturday 18 April 2026 00:54:47 +0000 (0:00:01.622) 0:00:20.478 ******** 2026-04-18 00:55:11.421397 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.421401 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.421411 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:55:11.421419 | orchestrator | 2026-04-18 00:55:11.421424 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-18 00:55:11.421430 | orchestrator | Saturday 18 April 2026 00:54:48 +0000 (0:00:00.680) 0:00:21.159 ******** 2026-04-18 00:55:11.421436 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.421441 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.421448 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:55:11.421453 | orchestrator | 2026-04-18 00:55:11.421459 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-18 00:55:11.421465 | orchestrator | Saturday 18 April 2026 00:54:48 +0000 (0:00:00.436) 0:00:21.595 ******** 2026-04-18 00:55:11.421471 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.421478 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.421484 | orchestrator | skipping: 
[testbed-node-2] 2026-04-18 00:55:11.421490 | orchestrator | 2026-04-18 00:55:11.421497 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-18 00:55:11.421503 | orchestrator | Saturday 18 April 2026 00:54:49 +0000 (0:00:00.389) 0:00:21.984 ******** 2026-04-18 00:55:11.421511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.421520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.421537 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.421541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.421546 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.421554 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.421560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.421568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.421575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.421579 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:55:11.421583 | orchestrator | 2026-04-18 00:55:11.421587 | 
orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-18 00:55:11.421591 | orchestrator | Saturday 18 April 2026 00:54:49 +0000 (0:00:00.537) 0:00:22.522 ******** 2026-04-18 00:55:11.421595 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.421598 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.421602 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:55:11.421606 | orchestrator | 2026-04-18 00:55:11.421610 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-18 00:55:11.421614 | orchestrator | Saturday 18 April 2026 00:54:50 +0000 (0:00:00.255) 0:00:22.777 ******** 2026-04-18 00:55:11.421618 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-18 00:55:11.421623 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-18 00:55:11.421627 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-18 00:55:11.421630 | orchestrator | 2026-04-18 00:55:11.421634 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-18 00:55:11.421638 | orchestrator | Saturday 18 April 2026 00:54:52 +0000 (0:00:02.024) 0:00:24.801 ******** 2026-04-18 00:55:11.421642 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-18 00:55:11.421646 | orchestrator | 2026-04-18 00:55:11.421650 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-18 00:55:11.421654 | orchestrator | Saturday 18 April 2026 00:54:53 +0000 (0:00:01.013) 0:00:25.815 ******** 2026-04-18 00:55:11.421658 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.421662 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.421666 | orchestrator | skipping: [testbed-node-2] 2026-04-18 
00:55:11.421670 | orchestrator | 2026-04-18 00:55:11.421673 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-18 00:55:11.421677 | orchestrator | Saturday 18 April 2026 00:54:53 +0000 (0:00:00.497) 0:00:26.312 ******** 2026-04-18 00:55:11.421681 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-18 00:55:11.421685 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-18 00:55:11.421689 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-18 00:55:11.421693 | orchestrator | 2026-04-18 00:55:11.421697 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-18 00:55:11.421701 | orchestrator | Saturday 18 April 2026 00:54:55 +0000 (0:00:01.478) 0:00:27.791 ******** 2026-04-18 00:55:11.421705 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:55:11.421709 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:55:11.421713 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:55:11.421717 | orchestrator | 2026-04-18 00:55:11.421724 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-18 00:55:11.421728 | orchestrator | Saturday 18 April 2026 00:54:55 +0000 (0:00:00.290) 0:00:28.081 ******** 2026-04-18 00:55:11.421732 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-18 00:55:11.421736 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-18 00:55:11.421740 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-18 00:55:11.421743 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-18 00:55:11.421747 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-18 00:55:11.421755 | orchestrator | changed: [testbed-node-2] => 
(item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-18 00:55:11.421759 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-18 00:55:11.421763 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-18 00:55:11.421767 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-18 00:55:11.421771 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-18 00:55:11.421775 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-18 00:55:11.421781 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-18 00:55:11.421785 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-18 00:55:11.421789 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-18 00:55:11.421793 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-18 00:55:11.421797 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-18 00:55:11.421801 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-18 00:55:11.421805 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-18 00:55:11.421809 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-18 00:55:11.421813 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-18 00:55:11.421817 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-18 00:55:11.421821 | orchestrator | 2026-04-18 00:55:11.421825 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-18 00:55:11.421829 | orchestrator | Saturday 18 April 2026 00:55:04 +0000 (0:00:09.202) 0:00:37.283 ******** 2026-04-18 00:55:11.421833 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-18 00:55:11.421837 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-18 00:55:11.421841 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-18 00:55:11.421845 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-18 00:55:11.421849 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-18 00:55:11.421853 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-18 00:55:11.421857 | orchestrator | 2026-04-18 00:55:11.421861 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-18 00:55:11.421865 | orchestrator | Saturday 18 April 2026 00:55:06 +0000 (0:00:02.289) 0:00:39.573 ******** 2026-04-18 00:55:11.421872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.421881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.421897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-18 00:55:11.421905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421940 | orchestrator | 2026-04-18 00:55:11 | INFO  | Task 7e84a163-fd2a-4495-931a-01cc4f52160f is in state SUCCESS 2026-04-18 00:55:11.421953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-18 00:55:11.421962 | orchestrator | 2026-04-18 00:55:11.421966 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-18 00:55:11.421970 | orchestrator | Saturday 18 April 2026 00:55:09 +0000 (0:00:02.158) 0:00:41.732 ******** 2026-04-18 00:55:11.421974 | orchestrator | changed: [testbed-node-0] => { 2026-04-18 00:55:11.421978 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:55:11.421982 | orchestrator | } 2026-04-18 00:55:11.421986 | orchestrator | changed: [testbed-node-1] => { 2026-04-18 00:55:11.421990 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:55:11.421994 | orchestrator | } 2026-04-18 00:55:11.421998 | orchestrator | changed: [testbed-node-2] => { 2026-04-18 00:55:11.422005 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 00:55:11.422009 | orchestrator | } 2026-04-18 00:55:11.422049 | orchestrator | 2026-04-18 00:55:11.422053 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 
2026-04-18 00:55:11.422057 | orchestrator | Saturday 18 April 2026 00:55:09 +0000 (0:00:00.280) 0:00:42.013 ******** 2026-04-18 00:55:11.422062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.422066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.422073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.422077 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:55:11.422085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.422089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.422096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.422100 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:55:11.422105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-18 00:55:11.422111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-18 00:55:11.422119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-18 00:55:11.422123 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:55:11.422127 | orchestrator | 2026-04-18 00:55:11.422132 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-18 00:55:11.422138 | orchestrator | Saturday 18 April 2026 00:55:10 +0000 
(0:00:00.912) 0:00:42.925 ********
2026-04-18 00:55:11.422152 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:55:11.422161 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:55:11.422167 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:55:11.422174 | orchestrator | 
2026-04-18 00:55:11.422181 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-18 00:55:11.422187 | orchestrator | Saturday 18 April 2026 00:55:10 +0000 (0:00:00.287) 0:00:43.212 ********
2026-04-18 00:55:11.422194 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:55:11.422202 | orchestrator | 
2026-04-18 00:55:11.422209 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:55:11.422232 | orchestrator | testbed-node-0 : ok=18  changed=10  unreachable=0  failed=1  skipped=12  rescued=0  ignored=0
2026-04-18 00:55:11.422241 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
2026-04-18 00:55:11.422248 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0  failed=0  skipped=11  rescued=0  ignored=0
2026-04-18 00:55:11.422254 | orchestrator | 
2026-04-18 00:55:11.422261 | orchestrator | 
2026-04-18 00:55:11.422267 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:55:11.422316 | orchestrator | Saturday 18 April 2026 00:55:11 +0000 (0:00:00.624) 0:00:43.837 ********
2026-04-18 00:55:11.422327 | orchestrator | ===============================================================================
2026-04-18 00:55:11.422334 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.20s
2026-04-18 00:55:11.422339 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.64s
2026-04-18 00:55:11.422346 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.22s
2026-04-18 00:55:11.422353 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 2.94s
2026-04-18 00:55:11.422359 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.53s
2026-04-18 00:55:11.422366 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.29s
2026-04-18 00:55:11.422373 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.16s
2026-04-18 00:55:11.422380 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.02s
2026-04-18 00:55:11.422387 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.62s
2026-04-18 00:55:11.422394 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.48s
2026-04-18 00:55:11.422398 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 1.07s
2026-04-18 00:55:11.422402 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 1.01s
2026-04-18 00:55:11.422406 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.91s
2026-04-18 00:55:11.422410 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 0.82s
2026-04-18 00:55:11.422414 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 0.68s
2026-04-18 00:55:11.422418 | orchestrator | keystone : Creating keystone database ----------------------------------- 0.62s
2026-04-18 00:55:11.422422 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 0.57s
2026-04-18 00:55:11.422426 | orchestrator | keystone : include_tasks ------------------------------------------------ 0.56s
2026-04-18 00:55:11.422430 | 
orchestrator | keystone : Copying over existing policy file ---------------------------- 0.54s
2026-04-18 00:55:11.422438 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 0.50s
2026-04-18 00:55:11.422442 | orchestrator | 2026-04-18 00:55:11 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:55:14.454584 | orchestrator | 2026-04-18 00:55:14 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:55:14.454956 | orchestrator | 2026-04-18 00:55:14 | INFO  | Task c2ce7ac8-b40e-4e8f-87b9-81fc529d8014 is in state STARTED
2026-04-18 00:55:14.457656 | orchestrator | 2026-04-18 00:55:14 | INFO  | Task abe761e1-815b-40e9-8590-368d33526ecf is in state STARTED
2026-04-18 00:55:14.458471 | orchestrator | 2026-04-18 00:55:14 | INFO  | Task 63398fa5-83e5-4798-9bf5-797c2af76a8e is in state STARTED
2026-04-18 00:55:14.459378 | orchestrator | 2026-04-18 00:55:14 | INFO  | Task 45b7ed40-8359-40bf-858a-eaf8e96fc1d5 is in state STARTED
2026-04-18 00:55:14.459419 | orchestrator | 2026-04-18 00:55:14 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:56:12.404144 | orchestrator | 2026-04-18 00:56:12 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED
2026-04-18 00:56:12.406895 | orchestrator | 2026-04-18 00:56:12 | INFO  | Task c2ce7ac8-b40e-4e8f-87b9-81fc529d8014 is in state STARTED
2026-04-18 00:56:12.409687 | orchestrator | 2026-04-18 00:56:12 | INFO  | Task abe761e1-815b-40e9-8590-368d33526ecf is in state STARTED
2026-04-18 00:56:12.411982 | orchestrator | 2026-04-18 00:56:12 | INFO  | Task 
8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:12.414240 | orchestrator | 2026-04-18 00:56:12 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:12.415994 | orchestrator | 2026-04-18 00:56:12 | INFO  | Task 63398fa5-83e5-4798-9bf5-797c2af76a8e is in state SUCCESS 2026-04-18 00:56:12.417452 | orchestrator | 2026-04-18 00:56:12 | INFO  | Task 45b7ed40-8359-40bf-858a-eaf8e96fc1d5 is in state SUCCESS 2026-04-18 00:56:12.417487 | orchestrator | 2026-04-18 00:56:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:15.462542 | orchestrator | 2026-04-18 00:56:15 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:15.464080 | orchestrator | 2026-04-18 00:56:15 | INFO  | Task c2ce7ac8-b40e-4e8f-87b9-81fc529d8014 is in state STARTED 2026-04-18 00:56:15.465351 | orchestrator | 2026-04-18 00:56:15 | INFO  | Task abe761e1-815b-40e9-8590-368d33526ecf is in state SUCCESS 2026-04-18 00:56:15.465966 | orchestrator | 2026-04-18 00:56:15.465993 | orchestrator | 2026-04-18 00:56:15.466000 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-18 00:56:15.466008 | orchestrator | 2026-04-18 00:56:15.466053 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-18 00:56:15.466061 | orchestrator | Saturday 18 April 2026 00:55:14 +0000 (0:00:00.307) 0:00:00.307 ******** 2026-04-18 00:56:15.466067 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:15.466074 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:15.466080 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:15.466086 | orchestrator | 2026-04-18 00:56:15.466093 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-18 00:56:15.466100 | orchestrator | Saturday 18 April 2026 00:55:14 +0000 (0:00:00.333) 0:00:00.640 ******** 2026-04-18 00:56:15.466107 | 
orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-18 00:56:15.466115 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-18 00:56:15.466122 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-18 00:56:15.466128 | orchestrator | 2026-04-18 00:56:15.466135 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-18 00:56:15.466141 | orchestrator | 2026-04-18 00:56:15.466148 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-18 00:56:15.466155 | orchestrator | Saturday 18 April 2026 00:55:15 +0000 (0:00:00.361) 0:00:01.002 ******** 2026-04-18 00:56:15.466162 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:15.466170 | orchestrator | 2026-04-18 00:56:15.466176 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] ************* 2026-04-18 00:56:15.466183 | orchestrator | Saturday 18 April 2026 00:55:15 +0000 (0:00:00.586) 0:00:01.588 ******** 2026-04-18 00:56:15.466189 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (5 retries left). 2026-04-18 00:56:15.466196 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (4 retries left). 2026-04-18 00:56:15.466203 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (3 retries left). 2026-04-18 00:56:15.466209 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (2 retries left). 2026-04-18 00:56:15.466296 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating/deleting services (1 retries left). 
2026-04-18 00:56:15.466323 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-18 00:56:15.466333 | orchestrator | 2026-04-18 00:56:15.466340 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 00:56:15.466347 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2026-04-18 00:56:15.466355 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:56:15.466364 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 00:56:15.466370 | orchestrator | 2026-04-18 00:56:15.466377 | orchestrator | 2026-04-18 00:56:15.466384 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 00:56:15.466390 | orchestrator | Saturday 18 April 2026 00:56:09 +0000 (0:00:53.488) 0:00:55.077 ******** 2026-04-18 00:56:15.466397 | orchestrator | =============================================================================== 2026-04-18 00:56:15.466403 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------ 53.49s 2026-04-18 00:56:15.466410 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.59s 2026-04-18 00:56:15.466416 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2026-04-18 00:56:15.466423 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 
2026-04-18 00:56:15.466429 | orchestrator |
2026-04-18 00:56:15.466436 | orchestrator |
2026-04-18 00:56:15.466442 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-18 00:56:15.466448 | orchestrator |
2026-04-18 00:56:15.466454 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-18 00:56:15.466460 | orchestrator | Saturday 18 April 2026 00:55:14 +0000 (0:00:00.298) 0:00:00.298 ********
2026-04-18 00:56:15.466466 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:15.466472 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:15.466478 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:15.466483 | orchestrator |
2026-04-18 00:56:15.466489 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:56:15.466495 | orchestrator | Saturday 18 April 2026 00:55:15 +0000 (0:00:00.252) 0:00:00.551 ********
2026-04-18 00:56:15.466501 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-18 00:56:15.466508 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-18 00:56:15.466513 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-18 00:56:15.466519 | orchestrator |
2026-04-18 00:56:15.466525 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-18 00:56:15.466531 | orchestrator |
2026-04-18 00:56:15.466548 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-18 00:56:15.466555 | orchestrator | Saturday 18 April 2026 00:55:15 +0000 (0:00:00.453) 0:00:01.005 ********
2026-04-18 00:56:15.466561 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:15.466567 | orchestrator |
2026-04-18 00:56:15.466573 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-04-18 00:56:15.466579 | orchestrator | Saturday 18 April 2026 00:55:16 +0000 (0:00:00.796) 0:00:01.802 ********
2026-04-18 00:56:15.466594 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (5 retries left).
2026-04-18 00:56:15.466600 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (4 retries left).
2026-04-18 00:56:15.466607 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (3 retries left).
2026-04-18 00:56:15.466613 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (2 retries left).
2026-04-18 00:56:15.466619 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating/deleting services (1 retries left).
2026-04-18 00:56:15.466627 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:56:15.466634 | orchestrator |
2026-04-18 00:56:15.466640 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:56:15.466646 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-18 00:56:15.466653 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:56:15.466664 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:56:15.466671 | orchestrator |
2026-04-18 00:56:15.466678 | orchestrator |
2026-04-18 00:56:15.466685 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:56:15.466691 | orchestrator | Saturday 18 April 2026 00:56:09 +0000 (0:00:53.517) 0:00:55.320 ********
2026-04-18 00:56:15.466698 | orchestrator | ===============================================================================
2026-04-18 00:56:15.466704 | orchestrator | service-ks-register : designate | Creating/deleting services ----------- 53.52s
2026-04-18 00:56:15.466710 | orchestrator | designate : include_tasks ----------------------------------------------- 0.80s
2026-04-18 00:56:15.466716 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2026-04-18 00:56:15.466722 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2026-04-18 00:56:15.466729 | orchestrator |
2026-04-18 00:56:15.466734 | orchestrator |
2026-04-18 00:56:15.466741 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-18 00:56:15.466746 | orchestrator |
2026-04-18 00:56:15.466767 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-18 00:56:15.466772 | orchestrator | Saturday 18 April 2026 00:55:14 +0000 (0:00:00.273) 0:00:00.273 ********
2026-04-18 00:56:15.466776 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:15.466781 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:15.466785 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:15.466790 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:15.466794 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:15.466799 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:15.466803 | orchestrator |
2026-04-18 00:56:15.466808 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:56:15.466812 | orchestrator | Saturday 18 April 2026 00:55:15 +0000 (0:00:00.471) 0:00:00.744 ********
2026-04-18 00:56:15.466816 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-18 00:56:15.466821 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-18 00:56:15.466826 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-18 00:56:15.466829 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-18 00:56:15.466833 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-18 00:56:15.466842 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-18 00:56:15.466846 | orchestrator |
2026-04-18 00:56:15.466850 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-18 00:56:15.466853 | orchestrator |
2026-04-18 00:56:15.466857 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-18 00:56:15.466861 | orchestrator | Saturday 18 April 2026 00:55:16 +0000 (0:00:00.908) 0:00:01.653 ********
2026-04-18 00:56:15.466865 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:15.466869 | orchestrator |
2026-04-18 00:56:15.466872 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-18 00:56:15.466876 | orchestrator | Saturday 18 April 2026 00:55:17 +0000 (0:00:00.889) 0:00:02.543 ********
2026-04-18 00:56:15.466880 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:15.466883 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:15.466887 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:15.466891 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:15.466900 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:15.466904 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:15.466908 | orchestrator |
2026-04-18 00:56:15.466911 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-18 00:56:15.466915 | orchestrator | Saturday 18 April 2026 00:55:18 +0000 (0:00:01.404) 0:00:03.948 ********
2026-04-18 00:56:15.466919 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:15.466922 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:15.466926 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:15.466930 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:15.466933 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:15.466937 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:15.466941 | orchestrator |
2026-04-18 00:56:15.466944 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-18 00:56:15.466948 | orchestrator | Saturday 18 April 2026 00:55:19 +0000 (0:00:01.083) 0:00:05.031 ********
2026-04-18 00:56:15.466952 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:15.466955 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:15.466959 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:15.466963 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:15.466966 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:15.466970 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:15.466974 | orchestrator |
2026-04-18 00:56:15.466977 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-18 00:56:15.466981 | orchestrator | Saturday 18 April 2026 00:55:20 +0000 (0:00:00.531) 0:00:05.563 ********
2026-04-18 00:56:15.466985 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:15.466988 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:15.466992 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:15.466996 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:15.466999 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:15.467003 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:15.467009 | orchestrator |
2026-04-18 00:56:15.467015 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-04-18 00:56:15.467020 | orchestrator | Saturday 18 April 2026 00:55:20 +0000 (0:00:00.582) 0:00:06.146 ********
2026-04-18 00:56:15.467026 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (5 retries left).
2026-04-18 00:56:15.467032 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (4 retries left).
2026-04-18 00:56:15.467039 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (3 retries left).
2026-04-18 00:56:15.467049 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (2 retries left).
2026-04-18 00:56:15.467055 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating/deleting services (1 retries left).
2026-04-18 00:56:15.467066 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:56:15.467074 | orchestrator |
2026-04-18 00:56:15.467080 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:56:15.467086 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=2  rescued=0 ignored=0
2026-04-18 00:56:15.467091 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:56:15.467097 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:56:15.467103 | orchestrator | testbed-node-3 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:56:15.467109 | orchestrator | testbed-node-4 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:56:15.467114 | orchestrator | testbed-node-5 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-18 00:56:15.467120 | orchestrator |
2026-04-18 00:56:15.467126 | orchestrator |
2026-04-18 00:56:15.467131 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:56:15.467137 | orchestrator | Saturday 18 April 2026 00:56:13 +0000 (0:00:53.320) 0:00:59.467 ********
2026-04-18 00:56:15.467143 | orchestrator | ===============================================================================
2026-04-18 00:56:15.467149 | orchestrator | service-ks-register : neutron | Creating/deleting services ------------- 53.32s
2026-04-18 00:56:15.467155 | orchestrator | neutron : Get container facts ------------------------------------------- 1.40s
2026-04-18 00:56:15.467160 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.08s
2026-04-18 00:56:15.467168 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s
2026-04-18 00:56:15.467172 | orchestrator | neutron : include_tasks ------------------------------------------------- 0.89s
2026-04-18 00:56:15.467176 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.58s
2026-04-18 00:56:15.467184 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 0.53s
2026-04-18 00:56:15.467188 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2026-04-18 00:56:15.467287 | orchestrator | 2026-04-18 00:56:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:56:15.468706 | orchestrator | 2026-04-18 00:56:15 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:15.470148 | orchestrator | 2026-04-18 00:56:15 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:15.470271 | orchestrator | 2026-04-18 00:56:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:18.527772 | orchestrator | 2026-04-18 00:56:18 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:18.529546 | orchestrator | 2026-04-18 00:56:18 | INFO  | Task c2ce7ac8-b40e-4e8f-87b9-81fc529d8014 is in state STARTED 2026-04-18 00:56:18.531966 | orchestrator | 2026-04-18 00:56:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 00:56:18.533964 | orchestrator | 2026-04-18 00:56:18 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:18.535740 | orchestrator | 2026-04-18 00:56:18 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:18.535790 | orchestrator | 2026-04-18 00:56:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:21.578568 | orchestrator | 2026-04-18 00:56:21 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:21.580097 | orchestrator | 2026-04-18 00:56:21 | INFO  | Task c2ce7ac8-b40e-4e8f-87b9-81fc529d8014 is in state STARTED 2026-04-18 00:56:21.581821 | orchestrator | 2026-04-18 00:56:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 00:56:21.583301 | orchestrator | 2026-04-18 00:56:21 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:21.585494 | orchestrator | 2026-04-18 00:56:21 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:21.585544 | orchestrator | 2026-04-18 00:56:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:24.627018 | 
orchestrator | 2026-04-18 00:56:24 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:24.630230 | orchestrator | 2026-04-18 00:56:24 | INFO  | Task c2ce7ac8-b40e-4e8f-87b9-81fc529d8014 is in state STARTED 2026-04-18 00:56:24.632130 | orchestrator | 2026-04-18 00:56:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 00:56:24.634673 | orchestrator | 2026-04-18 00:56:24 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:24.636585 | orchestrator | 2026-04-18 00:56:24 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:24.636814 | orchestrator | 2026-04-18 00:56:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:27.678164 | orchestrator | 2026-04-18 00:56:27 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:27.679974 | orchestrator | 2026-04-18 00:56:27 | INFO  | Task c2ce7ac8-b40e-4e8f-87b9-81fc529d8014 is in state STARTED 2026-04-18 00:56:27.681522 | orchestrator | 2026-04-18 00:56:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 00:56:27.683146 | orchestrator | 2026-04-18 00:56:27 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:27.684689 | orchestrator | 2026-04-18 00:56:27 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:27.684722 | orchestrator | 2026-04-18 00:56:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:30.729913 | orchestrator | 2026-04-18 00:56:30 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:30.731468 | orchestrator | 2026-04-18 00:56:30 | INFO  | Task c2ce7ac8-b40e-4e8f-87b9-81fc529d8014 is in state STARTED 2026-04-18 00:56:30.732932 | orchestrator | 2026-04-18 00:56:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 00:56:30.734183 | 
orchestrator | 2026-04-18 00:56:30 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:30.735561 | orchestrator | 2026-04-18 00:56:30 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:30.735648 | orchestrator | 2026-04-18 00:56:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:33.769236 | orchestrator | 2026-04-18 00:56:33 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:33.770053 | orchestrator | 2026-04-18 00:56:33 | INFO  | Task c2ce7ac8-b40e-4e8f-87b9-81fc529d8014 is in state SUCCESS 2026-04-18 00:56:33.771927 | orchestrator | 2026-04-18 00:56:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 00:56:33.773951 | orchestrator | 2026-04-18 00:56:33 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:33.775683 | orchestrator | 2026-04-18 00:56:33 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:33.775719 | orchestrator | 2026-04-18 00:56:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:36.814184 | orchestrator | 2026-04-18 00:56:36 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:36.814707 | orchestrator | 2026-04-18 00:56:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 00:56:36.815828 | orchestrator | 2026-04-18 00:56:36 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:36.816888 | orchestrator | 2026-04-18 00:56:36 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:36.816954 | orchestrator | 2026-04-18 00:56:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:39.860002 | orchestrator | 2026-04-18 00:56:39 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:39.861332 | orchestrator | 2026-04-18 
00:56:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 00:56:39.863172 | orchestrator | 2026-04-18 00:56:39 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:39.864773 | orchestrator | 2026-04-18 00:56:39 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:39.864848 | orchestrator | 2026-04-18 00:56:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:42.914988 | orchestrator | 2026-04-18 00:56:42 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state STARTED 2026-04-18 00:56:42.916877 | orchestrator | 2026-04-18 00:56:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 00:56:42.919493 | orchestrator | 2026-04-18 00:56:42 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED 2026-04-18 00:56:42.921356 | orchestrator | 2026-04-18 00:56:42 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED 2026-04-18 00:56:42.921623 | orchestrator | 2026-04-18 00:56:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 00:56:45.976223 | orchestrator | 2026-04-18 00:56:45 | INFO  | Task dfbc2c11-a88d-4a91-a422-4de15c50c547 is in state SUCCESS 2026-04-18 00:56:45.978325 | orchestrator | 2026-04-18 00:56:45.978392 | orchestrator | 2026-04-18 00:56:45.978403 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-18 00:56:45.978413 | orchestrator | 2026-04-18 00:56:45.978422 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-18 00:56:45.978431 | orchestrator | Saturday 18 April 2026 00:55:14 +0000 (0:00:00.094) 0:00:00.094 ******** 2026-04-18 00:56:45.978437 | orchestrator | changed: [localhost] 2026-04-18 00:56:45.978444 | orchestrator | 2026-04-18 00:56:45.978449 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 
2026-04-18 00:56:45.978455 | orchestrator | Saturday 18 April 2026 00:55:15 +0000 (0:00:00.822) 0:00:00.916 ********
2026-04-18 00:56:45.978461 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-04-18 00:56:45.978469 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-04-18 00:56:45.978475 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2026-04-18 00:56:45.978483 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2025.1.initramfs.sha256"}
2026-04-18 00:56:45.978514 | orchestrator |
2026-04-18 00:56:45.978520 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:56:45.978527 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-18 00:56:45.978534 | orchestrator |
2026-04-18 00:56:45.978540 | orchestrator |
2026-04-18 00:56:45.978547 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:56:45.978554 | orchestrator | Saturday 18 April 2026 00:56:31 +0000 (0:01:15.819) 0:01:16.736 ********
2026-04-18 00:56:45.978560 | orchestrator | ===============================================================================
2026-04-18 00:56:45.978566 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 75.82s
2026-04-18 00:56:45.978573 | orchestrator | Ensure the destination directory exists --------------------------------- 0.82s
2026-04-18 00:56:45.978580 | orchestrator |
2026-04-18 00:56:45.978584 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-18 00:56:45.978589 | orchestrator | 2.16.14
2026-04-18 00:56:45.978593 | orchestrator |
2026-04-18 00:56:45.978597 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-18 00:56:45.978673 | orchestrator |
2026-04-18 00:56:45.978678 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-18 00:56:45.978682 | orchestrator | Saturday 18 April 2026 00:46:46 +0000 (0:00:00.667) 0:00:00.667 ********
2026-04-18 00:56:45.978686 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.978690 | orchestrator |
2026-04-18 00:56:45.978694 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-18 00:56:45.978698 | orchestrator | Saturday 18 April 2026 00:46:48 +0000 (0:00:01.106) 0:00:01.774 ********
2026-04-18 00:56:45.978702 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.978706 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.978709 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.978713 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.978717 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.978721 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.978724 | orchestrator |
2026-04-18 00:56:45.978741 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-18 00:56:45.978746 | orchestrator | Saturday 18 April 2026 00:46:49 +0000 (0:00:01.853) 0:00:03.628 ********
2026-04-18 00:56:45.978750 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.978754 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.978758 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.978761 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.978765 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.978769 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.978772 | orchestrator |
2026-04-18 00:56:45.978776 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-18 00:56:45.978809 | orchestrator | Saturday 18 April 2026 00:46:50 +0000 (0:00:00.656) 0:00:04.284 ********
2026-04-18 00:56:45.978813 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.978817 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.978821 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.978824 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.978828 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.978832 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.978835 | orchestrator |
2026-04-18 00:56:45.978849 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-18 00:56:45.978853 | orchestrator | Saturday 18 April 2026 00:46:51 +0000 (0:00:01.251) 0:00:05.535 ********
2026-04-18 00:56:45.978857 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.978867 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.978872 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.978876 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.978881 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.978885 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.978889 | orchestrator |
2026-04-18 00:56:45.978893 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-18 00:56:45.978897 | orchestrator | Saturday 18 April 2026 00:46:53 +0000 (0:00:01.347) 0:00:06.883 ********
2026-04-18 00:56:45.978902 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.978906 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.978910 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.978915 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.978919 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.978923 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.978927 | orchestrator |
2026-04-18 00:56:45.978934 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-18 00:56:45.978955 | orchestrator | Saturday 18 April 2026 00:46:54 +0000 (0:00:01.042) 0:00:07.925 ********
2026-04-18 00:56:45.978964 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.978970 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.978976 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.978983 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.978989 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.978996 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.979002 | orchestrator |
2026-04-18 00:56:45.979009 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-18 00:56:45.979016 | orchestrator | Saturday 18 April 2026 00:46:55 +0000 (0:00:01.139) 0:00:09.328 ********
2026-04-18 00:56:45.979023 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979030 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.979036 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.979042 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.979048 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.979054 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.979060 | orchestrator |
2026-04-18 00:56:45.979067 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-18 00:56:45.979073 | orchestrator | Saturday 18 April 2026 00:46:56 +0000 (0:00:01.596) 0:00:10.468 ********
2026-04-18 00:56:45.979080 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.979087 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.979092 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.979097 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.979101 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.979105 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.979110 | orchestrator |
2026-04-18 00:56:45.979115 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-18 00:56:45.979121 | orchestrator | Saturday 18 April 2026 00:46:58 +0000 (0:00:01.596) 0:00:12.065 ********
2026-04-18 00:56:45.979127 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-18 00:56:45.979134 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-18 00:56:45.979140 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-18 00:56:45.979147 | orchestrator |
2026-04-18 00:56:45.979153 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-18 00:56:45.979160 | orchestrator | Saturday 18 April 2026 00:46:58 +0000 (0:00:00.649) 0:00:12.714 ********
2026-04-18 00:56:45.979167 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.979173 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.979179 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.979185 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.979192 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.979197 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.979202 | orchestrator |
2026-04-18 00:56:45.979222 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-18 00:56:45.979227 | orchestrator | Saturday 18 April 2026 00:47:00 +0000 (0:00:01.474) 0:00:14.189 ********
2026-04-18 00:56:45.979231 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-18 00:56:45.979236 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-18 00:56:45.979240 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-18 00:56:45.979244 | orchestrator |
2026-04-18 00:56:45.979249 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-18 00:56:45.979253 | orchestrator | Saturday 18 April 2026 00:47:03 +0000 (0:00:02.949) 0:00:17.138 ********
2026-04-18 00:56:45.979335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-18 00:56:45.979343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-18 00:56:45.979350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-18 00:56:45.979356 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979362 | orchestrator |
2026-04-18 00:56:45.979369 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-18 00:56:45.979416 | orchestrator | Saturday 18 April 2026 00:47:04 +0000 (0:00:00.697) 0:00:17.836 ********
2026-04-18 00:56:45.979423 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-18 00:56:45.979429 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-18 00:56:45.979439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-18 00:56:45.979443 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979446 | orchestrator |
2026-04-18 00:56:45.979450 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-18 00:56:45.979454 | orchestrator | Saturday 18 April 2026 00:47:05 +0000 (0:00:01.481) 0:00:19.318 ********
2026-04-18 00:56:45.979469 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-18 00:56:45.979479 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-18 00:56:45.979484 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-18 00:56:45.979491 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979496 | orchestrator |
2026-04-18 00:56:45.979503 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-18 00:56:45.979516 | orchestrator | Saturday 18 April 2026 00:47:05 +0000 (0:00:00.245) 0:00:19.563 ********
2026-04-18 00:56:45.979568 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-18 00:47:01.299485', 'end': '2026-04-18 00:47:01.397577', 'delta': '0:00:00.098092', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-18 00:56:45.979578 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-18 00:47:02.020518', 'end': '2026-04-18 00:47:02.132179', 'delta': '0:00:00.111661', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-18 00:56:45.979582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-18 00:47:03.090786', 'end': '2026-04-18 00:47:03.191938', 'delta': '0:00:00.101152', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-18 00:56:45.979586 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979590 | orchestrator |
2026-04-18 00:56:45.979598 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-18 00:56:45.979602 | orchestrator | Saturday 18 April 2026 00:47:06 +0000 (0:00:00.265) 0:00:19.829 ********
2026-04-18 00:56:45.979606 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.979610 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.979613 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.979617 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.979621 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.979624 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.979628 | orchestrator |
2026-04-18 00:56:45.979632 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-18 00:56:45.979636 | orchestrator | Saturday 18 April 2026 00:47:08 +0000 (0:00:02.435) 0:00:22.265 ********
2026-04-18 00:56:45.979639 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-18 00:56:45.979646 | orchestrator |
2026-04-18 00:56:45.979652 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-18 00:56:45.979657 | orchestrator | Saturday 18 April 2026 00:47:09 +0000 (0:00:00.764) 0:00:23.029 ********
2026-04-18 00:56:45.979669 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979676 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.979713 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.979720 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.979727 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.979734 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.979747 | orchestrator |
2026-04-18 00:56:45.979754 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-18 00:56:45.979761 | orchestrator | Saturday 18 April 2026 00:47:10 +0000 (0:00:00.849) 0:00:23.879 ********
2026-04-18 00:56:45.979768 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979775 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.979781 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.979788 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.979793 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.979797 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.979801 | orchestrator |
2026-04-18 00:56:45.979805 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-18 00:56:45.979809 | orchestrator | Saturday 18 April 2026 00:47:10 +0000 (0:00:00.717) 0:00:24.597 ********
2026-04-18 00:56:45.979812 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979816 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.979820 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.979823 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.979827 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.979831 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.979834 | orchestrator |
2026-04-18 00:56:45.979838 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-18 00:56:45.979842 | orchestrator | Saturday 18 April 2026 00:47:11 +0000 (0:00:00.591) 0:00:25.189 ********
2026-04-18 00:56:45.979846 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979849 | orchestrator |
2026-04-18 00:56:45.979853 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-18 00:56:45.979857 | orchestrator | Saturday 18 April 2026 00:47:11 +0000 (0:00:00.249) 0:00:25.438 ********
2026-04-18 00:56:45.979861 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979864 | orchestrator |
2026-04-18 00:56:45.979868 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-18 00:56:45.979872 | orchestrator | Saturday 18 April 2026 00:47:11 +0000 (0:00:00.153) 0:00:25.592 ********
2026-04-18 00:56:45.979875 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979879 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.979883 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.979886 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.979890 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.979894 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.979898 | orchestrator |
2026-04-18 00:56:45.979901 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-18 00:56:45.979905 | orchestrator | Saturday 18 April 2026 00:47:12 +0000 (0:00:00.492) 0:00:26.085 ********
2026-04-18 00:56:45.979909 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979913 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.979916 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.979920 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.979924 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.979927 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.979931 | orchestrator |
2026-04-18 00:56:45.979935 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-18 00:56:45.979939 | orchestrator | Saturday 18 April 2026 00:47:13 +0000 (0:00:00.731) 0:00:26.817 ********
2026-04-18 00:56:45.979943 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.979948 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.979954 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.979959 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.979965 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.979972 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.979978 | orchestrator |
2026-04-18 00:56:45.979984 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-18 00:56:45.979987 | orchestrator | Saturday 18 April 2026 00:47:14 +0000 (0:00:01.086) 0:00:27.903 ********
2026-04-18 00:56:45.979997 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.980001 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.980005 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.980008 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.980012 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.980016 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.980019 | orchestrator |
2026-04-18 00:56:45.980023 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-18 00:56:45.980027 | orchestrator | Saturday 18 April 2026 00:47:15 +0000 (0:00:00.861) 0:00:28.765 ********
2026-04-18 00:56:45.980031 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.980034 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.980038 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.980042 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.980045 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.980049 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.980053 | orchestrator |
2026-04-18 00:56:45.980061 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-18 00:56:45.980064 | orchestrator | Saturday 18 April 2026 00:47:15 +0000 (0:00:00.860) 0:00:29.266 ********
2026-04-18 00:56:45.980068 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.980072 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.980075 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.980079 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.980083 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.980086 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.980090 | orchestrator |
2026-04-18 00:56:45.980094 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-18 00:56:45.980098 | orchestrator | Saturday 18 April 2026 00:47:16 +0000 (0:00:00.720) 0:00:30.126 ********
2026-04-18 00:56:45.980102 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.980106 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.980109 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.980113 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.980121 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.980126 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.980130 | orchestrator |
2026-04-18 00:56:45.980135 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-18 00:56:45.980142 | orchestrator | Saturday 18 April 2026 00:47:17 +0000 (0:00:00.720) 0:00:30.846 ********
2026-04-18 00:56:45.980171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93b19634--3a0b--57aa--985a--342cbb17f88c-osd--block--93b19634--3a0b--57aa--985a--342cbb17f88c', 'dm-uuid-LVM-YK0myVOptkexU4yylGGexJ0jaYs9lCfjfP61t0d7zNHbihuSC1ZAuo5tCihsfvgP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--97728c5d--edf3--594c--abdf--329078c85e67-osd--block--97728c5d--edf3--594c--abdf--329078c85e67', 'dm-uuid-LVM-qFE9dGHppFmKhOdCc4qDZz73myZWhdPMxadEiFiH5AIaaC87PH1zQ9oHlyxIc5o5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part1', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part14', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part15', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part16', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-18 00:56:45.980352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93b19634--3a0b--57aa--985a--342cbb17f88c-osd--block--93b19634--3a0b--57aa--985a--342cbb17f88c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k487kU-yaqc-BXs0-GMkW-925l-J1IB-Afg7h4', 'scsi-0QEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447', 'scsi-SQEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-18 00:56:45.980364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9fd71a58--43ec--5e10--bd02--c7d805355b61-osd--block--9fd71a58--43ec--5e10--bd02--c7d805355b61', 'dm-uuid-LVM-TtJhrF2y0VyO4Sh6OfA0FLDMI90y59xBP29nR0p0I5oBou0AHhJqIV9AFvUXCxcb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--97728c5d--edf3--594c--abdf--329078c85e67-osd--block--97728c5d--edf3--594c--abdf--329078c85e67'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BQAqj4-jyAs-QeiJ-JOpD-ZItP-vhcO-2cvDia', 'scsi-0QEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b', 'scsi-SQEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-18 00:56:45.980373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a-osd--block--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a', 'dm-uuid-LVM-Xijz2X2U8n0Ed5NsAddEMEPKqTbNbDZmSThiyKvySp3zJkhCcVlExaDMUO7aJD8G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e', 'scsi-SQEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-18 00:56:45.980389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-18 00:56:45.980394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-18 00:56:45.980401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
 2026-04-18 00:56:45.980405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.980413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.980420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.980444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.980456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.980463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part14', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9fd71a58--43ec--5e10--bd02--c7d805355b61-osd--block--9fd71a58--43ec--5e10--bd02--c7d805355b61'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vd4WMr-taxH-kNJc-ee1o-JT06-W3Lq-V6APgm', 'scsi-0QEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0', 'scsi-SQEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a-osd--block--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xmEPRe-iMQ2-oq0P-9Wbs-J2QW-AhjF-qTStBM', 'scsi-0QEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527', 'scsi-SQEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d', 'scsi-SQEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe91ca0a--93bc--5e10--8732--62b62acecb68-osd--block--fe91ca0a--93bc--5e10--8732--62b62acecb68', 'dm-uuid-LVM-vxAXaLnoTR3noG6kTA5PQ91VH1N03DYFPTBDRXpGVHe0ELkRBtjb0x5wCQTXtTcQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a409408a--9332--5b4b--a953--28c1be45fb12-osd--block--a409408a--9332--5b4b--a953--28c1be45fb12', 'dm-uuid-LVM-dsseoypKsuamXePlZC8Uc3qwD7RMbzZPjnImWAbzG6PG7EKQVP1eJvcLfiXWqcDJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981204 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.981208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fe91ca0a--93bc--5e10--8732--62b62acecb68-osd--block--fe91ca0a--93bc--5e10--8732--62b62acecb68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XjJwIb-fx48-eBoK-2o3I-RfXz-21zi-Yu5tj7', 'scsi-0QEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a', 'scsi-SQEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a409408a--9332--5b4b--a953--28c1be45fb12-osd--block--a409408a--9332--5b4b--a953--28c1be45fb12'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3SXKxf-RChQ-pDHv-9T2b-NCQG-Gfi0-74c3lw', 'scsi-0QEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389', 'scsi-SQEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231', 'scsi-SQEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part1', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part14', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part15', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part16', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981358 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981362 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.981369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981426 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.981435 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.981442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-04-18 00:56:45.981481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part1', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part14', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part15', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part16', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981503 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.981510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-18 00:56:45.981534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:56:45.981548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981559 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:56:45.981563 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.981567 | orchestrator | 2026-04-18 00:56:45.981572 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-18 00:56:45.981576 | orchestrator | Saturday 18 April 2026 00:47:19 +0000 (0:00:01.914) 0:00:32.761 ******** 2026-04-18 00:56:45.981581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93b19634--3a0b--57aa--985a--342cbb17f88c-osd--block--93b19634--3a0b--57aa--985a--342cbb17f88c', 'dm-uuid-LVM-YK0myVOptkexU4yylGGexJ0jaYs9lCfjfP61t0d7zNHbihuSC1ZAuo5tCihsfvgP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981589 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--97728c5d--edf3--594c--abdf--329078c85e67-osd--block--97728c5d--edf3--594c--abdf--329078c85e67', 'dm-uuid-LVM-qFE9dGHppFmKhOdCc4qDZz73myZWhdPMxadEiFiH5AIaaC87PH1zQ9oHlyxIc5o5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981600 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981606 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9fd71a58--43ec--5e10--bd02--c7d805355b61-osd--block--9fd71a58--43ec--5e10--bd02--c7d805355b61', 'dm-uuid-LVM-TtJhrF2y0VyO4Sh6OfA0FLDMI90y59xBP29nR0p0I5oBou0AHhJqIV9AFvUXCxcb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981613 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981619 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a-osd--block--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a', 'dm-uuid-LVM-Xijz2X2U8n0Ed5NsAddEMEPKqTbNbDZmSThiyKvySp3zJkhCcVlExaDMUO7aJD8G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981667 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981687 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981709 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981718 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981738 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-18 00:56:45.981743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part1', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part14', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part15', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part16', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981761 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--93b19634--3a0b--57aa--985a--342cbb17f88c-osd--block--93b19634--3a0b--57aa--985a--342cbb17f88c'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k487kU-yaqc-BXs0-GMkW-925l-J1IB-Afg7h4', 'scsi-0QEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447', 'scsi-SQEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981776 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--97728c5d--edf3--594c--abdf--329078c85e67-osd--block--97728c5d--edf3--594c--abdf--329078c85e67'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BQAqj4-jyAs-QeiJ-JOpD-ZItP-vhcO-2cvDia', 'scsi-0QEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b', 'scsi-SQEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981781 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981786 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e', 'scsi-SQEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981796 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981803 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981808 | orchestrator | skipping: 
[testbed-node-3] 2026-04-18 00:56:45.981813 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe91ca0a--93bc--5e10--8732--62b62acecb68-osd--block--fe91ca0a--93bc--5e10--8732--62b62acecb68', 'dm-uuid-LVM-vxAXaLnoTR3noG6kTA5PQ91VH1N03DYFPTBDRXpGVHe0ELkRBtjb0x5wCQTXtTcQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981821 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part14', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-18 00:56:45.981833 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9fd71a58--43ec--5e10--bd02--c7d805355b61-osd--block--9fd71a58--43ec--5e10--bd02--c7d805355b61'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vd4WMr-taxH-kNJc-ee1o-JT06-W3Lq-V6APgm', 'scsi-0QEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0', 'scsi-SQEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a409408a--9332--5b4b--a953--28c1be45fb12-osd--block--a409408a--9332--5b4b--a953--28c1be45fb12', 'dm-uuid-LVM-dsseoypKsuamXePlZC8Uc3qwD7RMbzZPjnImWAbzG6PG7EKQVP1eJvcLfiXWqcDJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a-osd--block--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xmEPRe-iMQ2-oq0P-9Wbs-J2QW-AhjF-qTStBM', 'scsi-0QEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527', 'scsi-SQEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981848 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d', 'scsi-SQEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981861 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981871 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981878 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981949 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981958 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981965 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981977 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981985 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.981999 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982006 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982073 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982086 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982093 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part1', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part14', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part15', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part16', 'scsi-SQEMU_QEMU_HARDDISK_d62b803e-00a4-4d44-9f80-4fab0bc8c0f9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982792 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982821 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.982828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982845 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982863 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982871 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982877 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982884 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982896 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982902 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982919 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982925 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982933 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fe91ca0a--93bc--5e10--8732--62b62acecb68-osd--block--fe91ca0a--93bc--5e10--8732--62b62acecb68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XjJwIb-fx48-eBoK-2o3I-RfXz-21zi-Yu5tj7', 'scsi-0QEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a', 'scsi-SQEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982945 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a409408a--9332--5b4b--a953--28c1be45fb12-osd--block--a409408a--9332--5b4b--a953--28c1be45fb12'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3SXKxf-RChQ-pDHv-9T2b-NCQG-Gfi0-74c3lw', 'scsi-0QEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389', 'scsi-SQEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231', 'scsi-SQEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.982995 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part1', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part14', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part15', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part16', 'scsi-SQEMU_QEMU_HARDDISK_9213db1e-4236-43b7-9dd1-3c155c1e850d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-18 00:56:45.983001 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.983006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983010 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983020 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.983027 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.983033 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983042 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983055 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983067 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983074 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983081 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983092 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983098 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983114 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a56528a-54c0-4b72-af2f-9bfd74ddf29e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-18 00:56:45.983122 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:56:45.983135 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.983142 | orchestrator | 2026-04-18 00:56:45.983149 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-18 00:56:45.983157 | orchestrator | Saturday 18 April 2026 00:47:20 +0000 (0:00:01.396) 0:00:34.157 ******** 2026-04-18 00:56:45.983163 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.983171 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.983178 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.983184 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.983191 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.983195 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.983198 | orchestrator | 2026-04-18 00:56:45.983202 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-18 00:56:45.983206 | orchestrator | Saturday 18 April 2026 00:47:22 +0000 (0:00:01.774) 0:00:35.932 ******** 2026-04-18 00:56:45.983210 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.983214 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.983217 | 
orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.983222 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.983225 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.983229 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.983233 | orchestrator | 2026-04-18 00:56:45.983238 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-18 00:56:45.983242 | orchestrator | Saturday 18 April 2026 00:47:23 +0000 (0:00:01.266) 0:00:37.199 ******** 2026-04-18 00:56:45.983246 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.983250 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.983298 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.983309 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.983315 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.983322 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.983329 | orchestrator | 2026-04-18 00:56:45.983336 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-18 00:56:45.983343 | orchestrator | Saturday 18 April 2026 00:47:23 +0000 (0:00:00.533) 0:00:37.732 ******** 2026-04-18 00:56:45.983349 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.983354 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.983358 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.983363 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.983367 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.983372 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.983376 | orchestrator | 2026-04-18 00:56:45.983380 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-18 00:56:45.983385 | orchestrator | Saturday 18 April 2026 00:47:24 +0000 (0:00:00.865) 0:00:38.598 ******** 2026-04-18 00:56:45.983389 | orchestrator | skipping: 
[testbed-node-3] 2026-04-18 00:56:45.983399 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.983403 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.983408 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.983412 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.983417 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.983422 | orchestrator | 2026-04-18 00:56:45.983429 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-18 00:56:45.983435 | orchestrator | Saturday 18 April 2026 00:47:25 +0000 (0:00:00.797) 0:00:39.396 ******** 2026-04-18 00:56:45.983457 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.983465 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.983471 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.983477 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.983483 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.983489 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.983496 | orchestrator | 2026-04-18 00:56:45.983502 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-18 00:56:45.983509 | orchestrator | Saturday 18 April 2026 00:47:26 +0000 (0:00:00.866) 0:00:40.262 ******** 2026-04-18 00:56:45.983523 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-18 00:56:45.983530 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-18 00:56:45.983537 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-18 00:56:45.983543 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-18 00:56:45.983549 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-18 00:56:45.983555 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-18 00:56:45.983562 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 
2026-04-18 00:56:45.983568 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-18 00:56:45.983574 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-18 00:56:45.983579 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-18 00:56:45.983585 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-18 00:56:45.983592 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-18 00:56:45.983598 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-18 00:56:45.983605 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-18 00:56:45.983611 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-18 00:56:45.983618 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-18 00:56:45.983624 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-18 00:56:45.983631 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-18 00:56:45.983638 | orchestrator | 2026-04-18 00:56:45.983644 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-18 00:56:45.983648 | orchestrator | Saturday 18 April 2026 00:47:29 +0000 (0:00:03.204) 0:00:43.467 ******** 2026-04-18 00:56:45.983652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-18 00:56:45.983657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-18 00:56:45.983661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-18 00:56:45.983664 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.983669 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-18 00:56:45.983673 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-18 00:56:45.983677 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-18 00:56:45.983681 | orchestrator | skipping: [testbed-node-4] 
2026-04-18 00:56:45.983689 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-18 00:56:45.983694 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-18 00:56:45.983700 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-18 00:56:45.983706 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.983711 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-18 00:56:45.983717 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-18 00:56:45.983722 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-18 00:56:45.983728 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.983735 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-18 00:56:45.983742 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-18 00:56:45.983748 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-18 00:56:45.983764 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.983769 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-18 00:56:45.983773 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-18 00:56:45.983777 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-18 00:56:45.983781 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.983785 | orchestrator | 2026-04-18 00:56:45.983789 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-18 00:56:45.983793 | orchestrator | Saturday 18 April 2026 00:47:30 +0000 (0:00:01.037) 0:00:44.505 ******** 2026-04-18 00:56:45.983797 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.983801 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.983805 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.983810 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.983814 | orchestrator | 2026-04-18 00:56:45.983818 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-18 00:56:45.983822 | orchestrator | Saturday 18 April 2026 00:47:31 +0000 (0:00:01.208) 0:00:45.713 ******** 2026-04-18 00:56:45.983826 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.983829 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.983834 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.983838 | orchestrator | 2026-04-18 00:56:45.983848 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-18 00:56:45.983852 | orchestrator | Saturday 18 April 2026 00:47:32 +0000 (0:00:00.489) 0:00:46.203 ******** 2026-04-18 00:56:45.983856 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.983860 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.983864 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.983868 | orchestrator | 2026-04-18 00:56:45.983871 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-18 00:56:45.983875 | orchestrator | Saturday 18 April 2026 00:47:32 +0000 (0:00:00.308) 0:00:46.511 ******** 2026-04-18 00:56:45.983880 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.983884 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.983888 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.983892 | orchestrator | 2026-04-18 00:56:45.983896 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-18 00:56:45.983900 | orchestrator | Saturday 18 April 2026 00:47:33 +0000 (0:00:00.549) 0:00:47.061 ******** 2026-04-18 00:56:45.983904 | orchestrator | 
ok: [testbed-node-3] 2026-04-18 00:56:45.983908 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.983918 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.983922 | orchestrator | 2026-04-18 00:56:45.983926 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-18 00:56:45.983929 | orchestrator | Saturday 18 April 2026 00:47:34 +0000 (0:00:00.732) 0:00:47.793 ******** 2026-04-18 00:56:45.983933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-18 00:56:45.983937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-18 00:56:45.983941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-18 00:56:45.983945 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.983949 | orchestrator | 2026-04-18 00:56:45.983953 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-18 00:56:45.983957 | orchestrator | Saturday 18 April 2026 00:47:34 +0000 (0:00:00.948) 0:00:48.742 ******** 2026-04-18 00:56:45.983960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-18 00:56:45.983964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-18 00:56:45.983968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-18 00:56:45.983972 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.983981 | orchestrator | 2026-04-18 00:56:45.983984 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-18 00:56:45.983988 | orchestrator | Saturday 18 April 2026 00:47:35 +0000 (0:00:00.455) 0:00:49.198 ******** 2026-04-18 00:56:45.983992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-18 00:56:45.983998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-18 00:56:45.984004 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-18 00:56:45.984013 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.984021 | orchestrator | 2026-04-18 00:56:45.984027 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-18 00:56:45.984032 | orchestrator | Saturday 18 April 2026 00:47:35 +0000 (0:00:00.473) 0:00:49.671 ******** 2026-04-18 00:56:45.984038 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.984044 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.984049 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.984055 | orchestrator | 2026-04-18 00:56:45.984060 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-18 00:56:45.984066 | orchestrator | Saturday 18 April 2026 00:47:36 +0000 (0:00:00.281) 0:00:49.953 ******** 2026-04-18 00:56:45.984072 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-18 00:56:45.984078 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-18 00:56:45.984084 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-18 00:56:45.984090 | orchestrator | 2026-04-18 00:56:45.984096 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-18 00:56:45.984102 | orchestrator | Saturday 18 April 2026 00:47:37 +0000 (0:00:01.336) 0:00:51.289 ******** 2026-04-18 00:56:45.984108 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-18 00:56:45.984116 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-18 00:56:45.984122 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-18 00:56:45.984128 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-18 00:56:45.984134 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-18 00:56:45.984140 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-18 00:56:45.984146 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-18 00:56:45.984152 | orchestrator | 2026-04-18 00:56:45.984158 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-18 00:56:45.984165 | orchestrator | Saturday 18 April 2026 00:47:38 +0000 (0:00:00.911) 0:00:52.201 ******** 2026-04-18 00:56:45.984170 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-18 00:56:45.984176 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-18 00:56:45.984183 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-18 00:56:45.984189 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-18 00:56:45.984196 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-18 00:56:45.984202 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-18 00:56:45.984208 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-18 00:56:45.984215 | orchestrator | 2026-04-18 00:56:45.984221 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-18 00:56:45.984232 | orchestrator | Saturday 18 April 2026 00:47:40 +0000 (0:00:02.493) 0:00:54.694 ******** 2026-04-18 00:56:45.984238 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.984251 | orchestrator | 2026-04-18 00:56:45.984254 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-04-18 00:56:45.984284 | orchestrator | Saturday 18 April 2026 00:47:42 +0000 (0:00:01.227) 0:00:55.922 ******** 2026-04-18 00:56:45.984288 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.984292 | orchestrator | 2026-04-18 00:56:45.984296 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-18 00:56:45.984307 | orchestrator | Saturday 18 April 2026 00:47:43 +0000 (0:00:01.035) 0:00:56.957 ******** 2026-04-18 00:56:45.984311 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.984315 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.984319 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.984326 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.984334 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.984341 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.984349 | orchestrator | 2026-04-18 00:56:45.984355 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-18 00:56:45.984361 | orchestrator | Saturday 18 April 2026 00:47:44 +0000 (0:00:01.281) 0:00:58.238 ******** 2026-04-18 00:56:45.984367 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.984373 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.984378 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.984385 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.984391 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.984397 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.984403 | orchestrator | 2026-04-18 00:56:45.984410 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-18 00:56:45.984416 | orchestrator | Saturday 18 April 2026 00:47:45 +0000 
(0:00:00.940) 0:00:59.178 ******** 2026-04-18 00:56:45.984422 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.984429 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.984436 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.984443 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.984450 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.984456 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.984462 | orchestrator | 2026-04-18 00:56:45.984471 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-18 00:56:45.984482 | orchestrator | Saturday 18 April 2026 00:47:46 +0000 (0:00:00.808) 0:00:59.987 ******** 2026-04-18 00:56:45.984488 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.984493 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.984499 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.984506 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.984511 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.984517 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.984523 | orchestrator | 2026-04-18 00:56:45.984529 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-18 00:56:45.984535 | orchestrator | Saturday 18 April 2026 00:47:47 +0000 (0:00:01.117) 0:01:01.105 ******** 2026-04-18 00:56:45.984542 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.984548 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.984555 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.984561 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.984568 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.984574 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.984580 | orchestrator | 2026-04-18 00:56:45.984588 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-04-18 00:56:45.984592 | orchestrator | Saturday 18 April 2026 00:47:48 +0000 (0:00:01.068) 0:01:02.174 ******** 2026-04-18 00:56:45.984596 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.984600 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.984604 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.984609 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.984620 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.984623 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.984627 | orchestrator | 2026-04-18 00:56:45.984632 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-18 00:56:45.984636 | orchestrator | Saturday 18 April 2026 00:47:49 +0000 (0:00:00.768) 0:01:02.942 ******** 2026-04-18 00:56:45.984640 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.984644 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.984647 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.984651 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.984655 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.984659 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.984663 | orchestrator | 2026-04-18 00:56:45.984666 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-18 00:56:45.984670 | orchestrator | Saturday 18 April 2026 00:47:49 +0000 (0:00:00.624) 0:01:03.566 ******** 2026-04-18 00:56:45.984674 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.984678 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.984682 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.984686 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.984690 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.984694 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.984698 | orchestrator | 2026-04-18 
00:56:45.984702 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-18 00:56:45.984706 | orchestrator | Saturday 18 April 2026 00:47:51 +0000 (0:00:01.442) 0:01:05.009 ******** 2026-04-18 00:56:45.984710 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.984714 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.984718 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.984722 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.984728 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.984734 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.984739 | orchestrator | 2026-04-18 00:56:45.984745 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-18 00:56:45.984757 | orchestrator | Saturday 18 April 2026 00:47:52 +0000 (0:00:01.638) 0:01:06.648 ******** 2026-04-18 00:56:45.984764 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.984770 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.984777 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.984784 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.984792 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.984796 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.984800 | orchestrator | 2026-04-18 00:56:45.984804 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-18 00:56:45.984808 | orchestrator | Saturday 18 April 2026 00:47:53 +0000 (0:00:00.705) 0:01:07.353 ******** 2026-04-18 00:56:45.984812 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.984816 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.984860 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.984864 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.984868 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.984872 | 
orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.984875 | orchestrator | 2026-04-18 00:56:45.984879 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-18 00:56:45.984891 | orchestrator | Saturday 18 April 2026 00:47:54 +0000 (0:00:00.603) 0:01:07.957 ******** 2026-04-18 00:56:45.984895 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.984899 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.984903 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.984906 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.984910 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.984914 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.984918 | orchestrator | 2026-04-18 00:56:45.984922 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-18 00:56:45.984933 | orchestrator | Saturday 18 April 2026 00:47:55 +0000 (0:00:00.852) 0:01:08.810 ******** 2026-04-18 00:56:45.984937 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.984941 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.984945 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.984949 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.984953 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.984957 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.984961 | orchestrator | 2026-04-18 00:56:45.984964 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-18 00:56:45.984968 | orchestrator | Saturday 18 April 2026 00:47:55 +0000 (0:00:00.657) 0:01:09.467 ******** 2026-04-18 00:56:45.984972 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.984976 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.984980 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.984984 | orchestrator | skipping: [testbed-node-0] 2026-04-18 
00:56:45.984988 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.984992 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.984996 | orchestrator |
2026-04-18 00:56:45.985000 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-18 00:56:45.985003 | orchestrator | Saturday 18 April 2026 00:47:56 +0000 (0:00:00.630) 0:01:10.098 ********
2026-04-18 00:56:45.985008 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985011 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985015 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985019 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985023 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985026 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985030 | orchestrator |
2026-04-18 00:56:45.985034 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-18 00:56:45.985038 | orchestrator | Saturday 18 April 2026 00:47:56 +0000 (0:00:00.490) 0:01:10.589 ********
2026-04-18 00:56:45.985041 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985045 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985050 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985053 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985057 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985061 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985065 | orchestrator |
2026-04-18 00:56:45.985069 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-18 00:56:45.985072 | orchestrator | Saturday 18 April 2026 00:47:57 +0000 (0:00:00.602) 0:01:11.192 ********
2026-04-18 00:56:45.985076 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985080 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985084 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985088 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.985092 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.985095 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.985099 | orchestrator |
2026-04-18 00:56:45.985103 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-18 00:56:45.985107 | orchestrator | Saturday 18 April 2026 00:47:57 +0000 (0:00:00.433) 0:01:11.625 ********
2026-04-18 00:56:45.985111 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.985115 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.985119 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.985122 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.985126 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.985130 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.985133 | orchestrator |
2026-04-18 00:56:45.985137 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-18 00:56:45.985141 | orchestrator | Saturday 18 April 2026 00:47:58 +0000 (0:00:00.567) 0:01:12.193 ********
2026-04-18 00:56:45.985145 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.985149 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.985158 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.985162 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.985166 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.985170 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.985173 | orchestrator |
2026-04-18 00:56:45.985177 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-18 00:56:45.985181 | orchestrator | Saturday 18 April 2026 00:47:59 +0000 (0:00:01.187) 0:01:13.381 ********
2026-04-18 00:56:45.985186 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.985190 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.985194 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.985198 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.985201 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.985205 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.985209 | orchestrator |
2026-04-18 00:56:45.985213 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-18 00:56:45.985221 | orchestrator | Saturday 18 April 2026 00:48:01 +0000 (0:00:01.598) 0:01:14.980 ********
2026-04-18 00:56:45.985225 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.985229 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.985232 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.985236 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.985240 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.985244 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.985247 | orchestrator |
2026-04-18 00:56:45.985252 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-18 00:56:45.985270 | orchestrator | Saturday 18 April 2026 00:48:03 +0000 (0:00:02.589) 0:01:17.569 ********
2026-04-18 00:56:45.985276 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.985281 | orchestrator |
2026-04-18 00:56:45.985284 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-18 00:56:45.985292 | orchestrator | Saturday 18 April 2026 00:48:05 +0000 (0:00:01.261) 0:01:18.830 ********
2026-04-18 00:56:45.985296 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985300 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985304 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985308 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985312 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985315 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985319 | orchestrator |
2026-04-18 00:56:45.985323 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-18 00:56:45.985326 | orchestrator | Saturday 18 April 2026 00:48:05 +0000 (0:00:00.726) 0:01:19.556 ********
2026-04-18 00:56:45.985330 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985334 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985338 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985342 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985345 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985349 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985353 | orchestrator |
2026-04-18 00:56:45.985357 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-18 00:56:45.985360 | orchestrator | Saturday 18 April 2026 00:48:06 +0000 (0:00:00.557) 0:01:20.113 ********
2026-04-18 00:56:45.985364 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-18 00:56:45.985368 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-18 00:56:45.985372 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-18 00:56:45.985376 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-18 00:56:45.985380 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-18 00:56:45.985390 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-18 00:56:45.985394 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-18 00:56:45.985398 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-18 00:56:45.985402 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-18 00:56:45.985406 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-18 00:56:45.985410 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-18 00:56:45.985413 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-18 00:56:45.985417 | orchestrator |
2026-04-18 00:56:45.985421 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-18 00:56:45.985425 | orchestrator | Saturday 18 April 2026 00:48:07 +0000 (0:00:01.484) 0:01:21.598 ********
2026-04-18 00:56:45.985428 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.985432 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.985436 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.985440 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.985444 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.985447 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.985451 | orchestrator |
2026-04-18 00:56:45.985455 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-18 00:56:45.985459 | orchestrator | Saturday 18 April 2026 00:48:08 +0000 (0:00:01.092) 0:01:22.691 ********
2026-04-18 00:56:45.985463 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985467 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985471 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985475 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985478 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985483 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985487 | orchestrator |
2026-04-18 00:56:45.985490 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-18 00:56:45.985494 | orchestrator | Saturday 18 April 2026 00:48:09 +0000 (0:00:00.803) 0:01:23.494 ********
2026-04-18 00:56:45.985498 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985503 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985508 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985515 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985520 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985524 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985528 | orchestrator |
2026-04-18 00:56:45.985532 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-18 00:56:45.985536 | orchestrator | Saturday 18 April 2026 00:48:10 +0000 (0:00:00.408) 0:01:23.903 ********
2026-04-18 00:56:45.985540 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985544 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985548 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985555 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985559 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985565 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985572 | orchestrator |
2026-04-18 00:56:45.985578 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-18 00:56:45.985584 | orchestrator | Saturday 18 April 2026 00:48:10 +0000 (0:00:00.605) 0:01:24.508 ********
2026-04-18 00:56:45.985594 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.985602 | orchestrator |
2026-04-18 00:56:45.985610 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-18 00:56:45.985616 | orchestrator | Saturday 18 April 2026 00:48:11 +0000 (0:00:01.067) 0:01:25.575 ********
2026-04-18 00:56:45.985628 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.985635 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.985641 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.985647 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.985659 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.985666 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.985672 | orchestrator |
2026-04-18 00:56:45.985680 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-18 00:56:45.985686 | orchestrator | Saturday 18 April 2026 00:48:55 +0000 (0:00:43.542) 0:02:09.117 ********
2026-04-18 00:56:45.985693 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-18 00:56:45.985699 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-18 00:56:45.985705 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-18 00:56:45.985712 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985718 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-18 00:56:45.985725 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-18 00:56:45.985731 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-18 00:56:45.985737 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985743 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-18 00:56:45.985750 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-18 00:56:45.985758 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-18 00:56:45.985764 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985770 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-18 00:56:45.985776 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-18 00:56:45.985782 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-18 00:56:45.985789 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985795 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-18 00:56:45.985801 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-18 00:56:45.985808 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-18 00:56:45.985814 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985821 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-18 00:56:45.985825 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-18 00:56:45.985830 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-18 00:56:45.985833 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985837 | orchestrator |
2026-04-18 00:56:45.985841 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-18 00:56:45.985846 | orchestrator | Saturday 18 April 2026 00:48:56 +0000 (0:00:00.876) 0:02:09.994 ********
2026-04-18 00:56:45.985849 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985853 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985857 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985861 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985865 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985869 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985873 | orchestrator |
2026-04-18 00:56:45.985877 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-18 00:56:45.985880 | orchestrator | Saturday 18 April 2026 00:48:56 +0000 (0:00:00.580) 0:02:10.574 ********
2026-04-18 00:56:45.985884 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985888 | orchestrator |
2026-04-18 00:56:45.985899 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-18 00:56:45.985903 | orchestrator | Saturday 18 April 2026 00:48:56 +0000 (0:00:00.140) 0:02:10.715 ********
2026-04-18 00:56:45.985907 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985911 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985915 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985919 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985922 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985926 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985930 | orchestrator |
2026-04-18 00:56:45.985934 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-18 00:56:45.985937 | orchestrator | Saturday 18 April 2026 00:48:57 +0000 (0:00:00.811) 0:02:11.526 ********
2026-04-18 00:56:45.985941 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985945 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985949 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985953 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985957 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.985961 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.985965 | orchestrator |
2026-04-18 00:56:45.985976 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-18 00:56:45.985980 | orchestrator | Saturday 18 April 2026 00:48:58 +0000 (0:00:00.564) 0:02:12.091 ********
2026-04-18 00:56:45.985984 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.985987 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.985991 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.985995 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.985999 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986003 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986007 | orchestrator |
2026-04-18 00:56:45.986010 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-18 00:56:45.986069 | orchestrator | Saturday 18 April 2026 00:48:59 +0000 (0:00:00.702) 0:02:12.794 ********
2026-04-18 00:56:45.986074 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.986078 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.986082 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.986086 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.986090 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.986093 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.986097 | orchestrator |
2026-04-18 00:56:45.986108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-18 00:56:45.986112 | orchestrator | Saturday 18 April 2026 00:49:02 +0000 (0:00:03.942) 0:02:16.737 ********
2026-04-18 00:56:45.986116 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.986119 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.986123 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.986127 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.986131 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.986135 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.986138 | orchestrator |
2026-04-18 00:56:45.986142 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-18 00:56:45.986146 | orchestrator | Saturday 18 April 2026 00:49:03 +0000 (0:00:00.623) 0:02:17.361 ********
2026-04-18 00:56:45.986151 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.986156 | orchestrator |
2026-04-18 00:56:45.986159 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-18 00:56:45.986163 | orchestrator | Saturday 18 April 2026 00:49:04 +0000 (0:00:00.987) 0:02:18.348 ********
2026-04-18 00:56:45.986167 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.986171 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.986175 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.986179 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.986187 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986191 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986195 | orchestrator |
2026-04-18 00:56:45.986199 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-18 00:56:45.986202 | orchestrator | Saturday 18 April 2026 00:49:05 +0000 (0:00:00.502) 0:02:18.850 ********
2026-04-18 00:56:45.986206 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.986210 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.986214 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.986218 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.986221 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986225 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986229 | orchestrator |
2026-04-18 00:56:45.986233 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-18 00:56:45.986237 | orchestrator | Saturday 18 April 2026 00:49:05 +0000 (0:00:00.724) 0:02:19.574 ********
2026-04-18 00:56:45.986240 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.986244 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.986248 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.986252 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.986278 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986283 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986287 | orchestrator |
2026-04-18 00:56:45.986291 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-18 00:56:45.986294 | orchestrator | Saturday 18 April 2026 00:49:06 +0000 (0:00:00.519) 0:02:20.093 ********
2026-04-18 00:56:45.986298 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.986302 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.986306 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.986309 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.986313 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986317 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986321 | orchestrator |
2026-04-18 00:56:45.986325 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-18 00:56:45.986328 | orchestrator | Saturday 18 April 2026 00:49:06 +0000 (0:00:00.611) 0:02:20.705 ********
2026-04-18 00:56:45.986333 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.986336 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.986340 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.986344 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.986348 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986351 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986355 | orchestrator |
2026-04-18 00:56:45.986359 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-18 00:56:45.986363 | orchestrator | Saturday 18 April 2026 00:49:07 +0000 (0:00:00.490) 0:02:21.196 ********
2026-04-18 00:56:45.986366 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.986370 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.986374 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.986378 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.986381 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986385 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986389 | orchestrator |
2026-04-18 00:56:45.986393 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-18 00:56:45.986396 | orchestrator | Saturday 18 April 2026 00:49:08 +0000 (0:00:00.639) 0:02:21.836 ********
2026-04-18 00:56:45.986400 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.986404 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.986408 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.986412 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.986416 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986423 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986431 | orchestrator |
2026-04-18 00:56:45.986435 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-18 00:56:45.986439 | orchestrator | Saturday 18 April 2026 00:49:08 +0000 (0:00:00.533) 0:02:22.369 ********
2026-04-18 00:56:45.986443 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.986446 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.986450 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.986454 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.986458 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986461 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986465 | orchestrator |
2026-04-18 00:56:45.986469 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-18 00:56:45.986473 | orchestrator | Saturday 18 April 2026 00:49:09 +0000 (0:00:00.693) 0:02:23.063 ********
2026-04-18 00:56:45.986476 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.986481 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.986484 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.986488 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.986503 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.986507 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.986511 | orchestrator |
2026-04-18 00:56:45.986515 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-18 00:56:45.986519 | orchestrator | Saturday 18 April 2026 00:49:10 +0000 (0:00:00.880) 0:02:23.944 ********
2026-04-18 00:56:45.986523 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.986527 | orchestrator |
2026-04-18 00:56:45.986531 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-18 00:56:45.986535 | orchestrator | Saturday 18 April 2026 00:49:11 +0000 (0:00:01.148) 0:02:25.093 ********
2026-04-18 00:56:45.986538 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-18 00:56:45.986542 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-18 00:56:45.986546 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-18 00:56:45.986550 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-18 00:56:45.986554 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-18 00:56:45.986559 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-18 00:56:45.986562 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-18 00:56:45.986566 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-18 00:56:45.986570 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-18 00:56:45.986574 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-18 00:56:45.986578 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-18 00:56:45.986581 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-18 00:56:45.986585 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-18 00:56:45.986589 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-18 00:56:45.986593 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-18 00:56:45.986597 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-18 00:56:45.986601 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-18 00:56:45.986604 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-18 00:56:45.986608 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-18 00:56:45.986612 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-18 00:56:45.986616 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-18 00:56:45.986620 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-18 00:56:45.986623 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-18 00:56:45.986627 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-18 00:56:45.986635 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-18 00:56:45.986639 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-18 00:56:45.986643 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-18 00:56:45.986646 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-18 00:56:45.986650 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-18 00:56:45.986654 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-18 00:56:45.986658 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-18 00:56:45.986662 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-18 00:56:45.986666 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-18 00:56:45.986670 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-18 00:56:45.986674 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-18 00:56:45.986678 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-18 00:56:45.986681 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-18 00:56:45.986685 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-18 00:56:45.986689 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-18 00:56:45.986693 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-18 00:56:45.986697 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-18 00:56:45.986701 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-18 00:56:45.986704 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-18 00:56:45.986711 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-18 00:56:45.986715 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-18 00:56:45.986719 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-18 00:56:45.986723 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-18 00:56:45.986726 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-18 00:56:45.986730 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-18 00:56:45.986734 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-18 00:56:45.986738 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-18 00:56:45.986742 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-18 00:56:45.986746 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-18 00:56:45.986750 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-18 00:56:45.986763 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-18 00:56:45.986767 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-18 00:56:45.986771 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-18 00:56:45.986775 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-18 00:56:45.986778 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-18 00:56:45.986784 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-18 00:56:45.986790 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-18 00:56:45.986796 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-18 00:56:45.986802 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-18 00:56:45.986808 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-18 00:56:45.986814 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-18 00:56:45.986827 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-18 00:56:45.986833 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-18 00:56:45.986840 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-18 00:56:45.986846 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-18 00:56:45.986852 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-18 00:56:45.986858 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-18 00:56:45.986862 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-18 00:56:45.986865 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-18 00:56:45.986869 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-18 00:56:45.986873 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-18 00:56:45.986877 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-18 00:56:45.986881 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-18 00:56:45.986884 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-18 00:56:45.986888 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-18 00:56:45.986892 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-18 00:56:45.986896 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-18 00:56:45.986899 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-18 00:56:45.986903 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-18 00:56:45.986907 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-18 00:56:45.986911 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-18 00:56:45.986915 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-18 00:56:45.986918 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-18 00:56:45.986922 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-18 00:56:45.986926 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-18 00:56:45.986930 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-18 00:56:45.986933 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-18 00:56:45.986937 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-18 00:56:45.986941 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-18 00:56:45.986945 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-18 00:56:45.986948 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-18 00:56:45.986952 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-18 00:56:45.986956 | orchestrator |
2026-04-18 00:56:45.986959 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-18 00:56:45.986963 | orchestrator | Saturday 18 April 2026 00:49:18 +0000 (0:00:07.453) 0:02:32.546 ********
2026-04-18 00:56:45.986967 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.986971 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.986974 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.986982 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:45.986986 | orchestrator |
2026-04-18 00:56:45.986989 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-18 00:56:45.986993 | orchestrator | Saturday 18 April 2026 00:49:19 +0000 (0:00:00.994) 0:02:33.540 ********
2026-04-18 00:56:45.986997 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-18 00:56:45.987005 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-18 00:56:45.987009 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-18 00:56:45.987013 | orchestrator |
2026-04-18 00:56:45.987021 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-18 00:56:45.987025 | orchestrator | Saturday 18 April 2026 00:49:20 +0000 (0:00:00.768) 0:02:34.309 ********
2026-04-18 00:56:45.987029 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-18 00:56:45.987032 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-18 00:56:45.987036 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-18 00:56:45.987040 | orchestrator |
2026-04-18 00:56:45.987044 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-18 00:56:45.987047 | orchestrator | Saturday 18 April 2026 00:49:21 +0000 (0:00:01.295) 0:02:35.604 ********
2026-04-18 00:56:45.987051 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.987055 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.987058 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.987062 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987066 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987070 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.987073 | orchestrator |
2026-04-18 00:56:45.987077 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-18 00:56:45.987081 | orchestrator | Saturday 18 April 2026 00:49:22 +0000 (0:00:00.626) 0:02:36.231 ********
2026-04-18 00:56:45.987085 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.987089 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.987092 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.987096 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987099 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987103 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.987107 | orchestrator |
2026-04-18 00:56:45.987111 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-18 00:56:45.987115 | orchestrator | Saturday 18 April 2026 00:49:22 +0000 (0:00:00.461) 0:02:36.692 ********
2026-04-18 00:56:45.987119 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.987122 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.987126 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.987130 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987134 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987138 | orchestrator | skipping: [testbed-node-2]
2026-04-18
00:56:45.987141 | orchestrator | 2026-04-18 00:56:45.987145 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-18 00:56:45.987149 | orchestrator | Saturday 18 April 2026 00:49:23 +0000 (0:00:00.590) 0:02:37.283 ******** 2026-04-18 00:56:45.987153 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.987156 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.987160 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.987164 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.987168 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987171 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987175 | orchestrator | 2026-04-18 00:56:45.987179 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-18 00:56:45.987182 | orchestrator | Saturday 18 April 2026 00:49:24 +0000 (0:00:00.526) 0:02:37.809 ******** 2026-04-18 00:56:45.987186 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.987190 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.987198 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.987201 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.987205 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987209 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987213 | orchestrator | 2026-04-18 00:56:45.987216 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-18 00:56:45.987220 | orchestrator | Saturday 18 April 2026 00:49:24 +0000 (0:00:00.852) 0:02:38.662 ******** 2026-04-18 00:56:45.987224 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.987228 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.987231 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.987235 | orchestrator | skipping: 
[testbed-node-0] 2026-04-18 00:56:45.987238 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987242 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987246 | orchestrator | 2026-04-18 00:56:45.987250 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-18 00:56:45.987254 | orchestrator | Saturday 18 April 2026 00:49:25 +0000 (0:00:00.635) 0:02:39.297 ******** 2026-04-18 00:56:45.987278 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.987282 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.987285 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.987289 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.987293 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987331 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987337 | orchestrator | 2026-04-18 00:56:45.987340 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-18 00:56:45.987348 | orchestrator | Saturday 18 April 2026 00:49:26 +0000 (0:00:00.737) 0:02:40.035 ******** 2026-04-18 00:56:45.987352 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.987356 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.987359 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.987363 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.987367 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987371 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987374 | orchestrator | 2026-04-18 00:56:45.987378 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-18 00:56:45.987382 | orchestrator | Saturday 18 April 2026 00:49:26 +0000 (0:00:00.480) 0:02:40.516 ******** 2026-04-18 00:56:45.987386 | orchestrator | skipping: 
[testbed-node-0] 2026-04-18 00:56:45.987390 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987393 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987397 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.987401 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.987405 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.987409 | orchestrator | 2026-04-18 00:56:45.987417 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-18 00:56:45.987421 | orchestrator | Saturday 18 April 2026 00:49:29 +0000 (0:00:02.766) 0:02:43.282 ******** 2026-04-18 00:56:45.987425 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.987429 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.987433 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.987436 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.987440 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987444 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987448 | orchestrator | 2026-04-18 00:56:45.987452 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-18 00:56:45.987456 | orchestrator | Saturday 18 April 2026 00:49:30 +0000 (0:00:00.548) 0:02:43.830 ******** 2026-04-18 00:56:45.987459 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.987463 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.987470 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.987477 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.987488 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987498 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987505 | orchestrator | 2026-04-18 00:56:45.987513 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-18 00:56:45.987520 | orchestrator | Saturday 18 April 2026 00:49:30 +0000 
(0:00:00.544) 0:02:44.374 ******** 2026-04-18 00:56:45.987527 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.987534 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.987540 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.987547 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.987553 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987559 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987565 | orchestrator | 2026-04-18 00:56:45.987571 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-18 00:56:45.987578 | orchestrator | Saturday 18 April 2026 00:49:31 +0000 (0:00:00.788) 0:02:45.163 ******** 2026-04-18 00:56:45.987584 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-18 00:56:45.987591 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-18 00:56:45.987598 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-18 00:56:45.987604 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.987611 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.987617 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.987624 | orchestrator | 2026-04-18 00:56:45.987631 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-18 00:56:45.987638 | orchestrator | Saturday 18 April 2026 00:49:31 +0000 (0:00:00.504) 0:02:45.667 ******** 2026-04-18 00:56:45.987647 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-18 00:56:45.987657 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-18 00:56:45.987665 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.987673 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-18 00:56:45.987680 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-18 00:56:45.987687 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.987695 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-18 00:56:45.987699 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
2026-04-18 00:56:45.987709 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.987713 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987716 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987720 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.987724 | orchestrator |
2026-04-18 00:56:45.987733 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-18 00:56:45.987737 | orchestrator | Saturday 18 April 2026 00:49:32 +0000 (0:00:00.673) 0:02:46.341 ********
2026-04-18 00:56:45.987743 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.987749 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.987755 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.987760 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987766 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987772 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.987778 | orchestrator |
2026-04-18 00:56:45.987785 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-18 00:56:45.987791 | orchestrator | Saturday 18 April 2026 00:49:33 +0000 (0:00:00.455) 0:02:46.797 ********
2026-04-18 00:56:45.987795 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.987799 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.987804 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.987810 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987815 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987820 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.987826 | orchestrator |
2026-04-18 00:56:45.987831 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-18 00:56:45.987838 | orchestrator | Saturday 18 April 2026 00:49:33 +0000 (0:00:00.706) 0:02:47.503 ********
2026-04-18 00:56:45.987844 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.987851 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.987857 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.987863 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987868 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987872 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.987876 | orchestrator |
2026-04-18 00:56:45.987879 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-18 00:56:45.987883 | orchestrator | Saturday 18 April 2026 00:49:34 +0000 (0:00:00.554) 0:02:48.058 ********
2026-04-18 00:56:45.987887 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.987891 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.987895 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.987898 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987902 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987906 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.987910 | orchestrator |
2026-04-18 00:56:45.987913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-18 00:56:45.987917 | orchestrator | Saturday 18 April 2026 00:49:35 +0000 (0:00:00.721) 0:02:48.780 ********
2026-04-18 00:56:45.987921 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.987925 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.987928 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.987932 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987936 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987939 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.987943 | orchestrator |
2026-04-18 00:56:45.987947 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-18 00:56:45.987950 | orchestrator | Saturday 18 April 2026 00:49:35 +0000 (0:00:00.596) 0:02:49.376 ********
2026-04-18 00:56:45.987954 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.987958 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.987961 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.987970 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.987974 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.987978 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.987981 | orchestrator |
2026-04-18 00:56:45.987985 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-18 00:56:45.987989 | orchestrator | Saturday 18 April 2026 00:49:36 +0000 (0:00:00.810) 0:02:50.187 ********
2026-04-18 00:56:45.987993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:56:45.987996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:56:45.988000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:56:45.988004 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988008 | orchestrator |
2026-04-18 00:56:45.988011 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-18 00:56:45.988015 | orchestrator | Saturday 18 April 2026 00:49:36 +0000 (0:00:00.363) 0:02:50.550 ********
2026-04-18 00:56:45.988019 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:56:45.988023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:56:45.988027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:56:45.988031 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988034 | orchestrator |
2026-04-18 00:56:45.988038 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-18 00:56:45.988042 | orchestrator | Saturday 18 April 2026 00:49:37 +0000 (0:00:00.370) 0:02:50.921 ********
2026-04-18 00:56:45.988052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:56:45.988056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:56:45.988060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:56:45.988063 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988067 | orchestrator |
2026-04-18 00:56:45.988071 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-18 00:56:45.988074 | orchestrator | Saturday 18 April 2026 00:49:37 +0000 (0:00:00.340) 0:02:51.262 ********
2026-04-18 00:56:45.988078 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.988082 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.988086 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.988089 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.988093 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.988097 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.988101 | orchestrator |
2026-04-18 00:56:45.988104 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-18 00:56:45.988108 | orchestrator | Saturday 18 April 2026 00:49:38 +0000 (0:00:00.736) 0:02:51.999 ********
2026-04-18 00:56:45.988117 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-18 00:56:45.988121 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-18 00:56:45.988125 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-18 00:56:45.988128 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-18 00:56:45.988132 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.988136 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-18 00:56:45.988140 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.988144 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-18 00:56:45.988147 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.988151 | orchestrator |
2026-04-18 00:56:45.988155 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-18 00:56:45.988158 | orchestrator | Saturday 18 April 2026 00:49:40 +0000 (0:00:02.351) 0:02:54.350 ********
2026-04-18 00:56:45.988162 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.988166 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.988170 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.988174 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.988177 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.988185 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.988189 | orchestrator |
2026-04-18 00:56:45.988193 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-18 00:56:45.988197 | orchestrator | Saturday 18 April 2026 00:49:42 +0000 (0:00:02.367) 0:02:56.718 ********
2026-04-18 00:56:45.988200 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.988204 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.988208 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.988212 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.988215 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.988219 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.988223 | orchestrator |
2026-04-18 00:56:45.988227 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-18 00:56:45.988231 | orchestrator | Saturday 18 April 2026 00:49:44 +0000 (0:00:01.102) 0:02:57.821 ********
2026-04-18 00:56:45.988234 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988238 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.988242 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.988246 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.988251 | orchestrator |
2026-04-18 00:56:45.988254 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-18 00:56:45.988277 | orchestrator | Saturday 18 April 2026 00:49:45 +0000 (0:00:01.077) 0:02:58.899 ********
2026-04-18 00:56:45.988281 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.988285 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.988289 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.988293 | orchestrator |
2026-04-18 00:56:45.988296 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-18 00:56:45.988300 | orchestrator | Saturday 18 April 2026 00:49:45 +0000 (0:00:00.321) 0:02:59.220 ********
2026-04-18 00:56:45.988304 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.988308 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.988311 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.988315 | orchestrator |
2026-04-18 00:56:45.988319 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-18 00:56:45.988323 | orchestrator | Saturday 18 April 2026 00:49:46 +0000 (0:00:01.366) 0:03:00.586 ********
2026-04-18 00:56:45.988326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-18 00:56:45.988330 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-18 00:56:45.988334 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-18 00:56:45.988338 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.988341 | orchestrator |
2026-04-18 00:56:45.988345 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-18 00:56:45.988349 | orchestrator | Saturday 18 April 2026 00:49:47 +0000 (0:00:00.535) 0:03:01.122 ********
2026-04-18 00:56:45.988353 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.988357 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.988360 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.988364 | orchestrator |
2026-04-18 00:56:45.988368 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-18 00:56:45.988372 | orchestrator | Saturday 18 April 2026 00:49:47 +0000 (0:00:00.267) 0:03:01.389 ********
2026-04-18 00:56:45.988375 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.988379 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.988383 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.988387 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:45.988390 | orchestrator |
2026-04-18 00:56:45.988394 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-18 00:56:45.988398 | orchestrator | Saturday 18 April 2026 00:49:48 +0000 (0:00:00.785) 0:03:02.174 ********
2026-04-18 00:56:45.988407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:56:45.988413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:56:45.988418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:56:45.988421 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988425 | orchestrator |
2026-04-18 00:56:45.988429 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-18 00:56:45.988432 | orchestrator | Saturday 18 April 2026 00:49:48 +0000 (0:00:00.348) 0:03:02.523 ********
2026-04-18 00:56:45.988436 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988440 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.988443 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.988447 | orchestrator |
2026-04-18 00:56:45.988451 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-18 00:56:45.988455 | orchestrator | Saturday 18 April 2026 00:49:49 +0000 (0:00:00.318) 0:03:02.841 ********
2026-04-18 00:56:45.988458 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988463 | orchestrator |
2026-04-18 00:56:45.988466 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-18 00:56:45.988474 | orchestrator | Saturday 18 April 2026 00:49:49 +0000 (0:00:00.182) 0:03:03.024 ********
2026-04-18 00:56:45.988478 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988482 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.988486 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.988489 | orchestrator |
2026-04-18 00:56:45.988493 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-18 00:56:45.988497 | orchestrator | Saturday 18 April 2026 00:49:49 +0000 (0:00:00.242) 0:03:03.267 ********
2026-04-18 00:56:45.988501 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988504 | orchestrator |
2026-04-18 00:56:45.988508 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-18 00:56:45.988512 | orchestrator | Saturday 18 April 2026 00:49:49 +0000 (0:00:00.176) 0:03:03.444 ********
2026-04-18 00:56:45.988516 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988520 | orchestrator |
2026-04-18 00:56:45.988523 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-18 00:56:45.988527 | orchestrator | Saturday 18 April 2026 00:49:49 +0000 (0:00:00.177) 0:03:03.621 ********
2026-04-18 00:56:45.988531 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988535 | orchestrator |
2026-04-18 00:56:45.988538 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-18 00:56:45.988542 | orchestrator | Saturday 18 April 2026 00:49:50 +0000 (0:00:00.231) 0:03:03.853 ********
2026-04-18 00:56:45.988546 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988550 | orchestrator |
2026-04-18 00:56:45.988553 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-18 00:56:45.988557 | orchestrator | Saturday 18 April 2026 00:49:50 +0000 (0:00:00.180) 0:03:04.034 ********
2026-04-18 00:56:45.988561 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988564 | orchestrator |
2026-04-18 00:56:45.988568 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-18 00:56:45.988572 | orchestrator | Saturday 18 April 2026 00:49:50 +0000 (0:00:00.193) 0:03:04.228 ********
2026-04-18 00:56:45.988576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:56:45.988580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:56:45.988584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:56:45.988587 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988591 | orchestrator |
2026-04-18 00:56:45.988595 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-18 00:56:45.988598 | orchestrator | Saturday 18 April 2026 00:49:50 +0000 (0:00:00.341) 0:03:04.569 ********
2026-04-18 00:56:45.988602 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988612 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.988616 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.988619 | orchestrator |
2026-04-18 00:56:45.988623 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-18 00:56:45.988627 | orchestrator | Saturday 18 April 2026 00:49:51 +0000 (0:00:00.296) 0:03:04.866 ********
2026-04-18 00:56:45.988631 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988634 | orchestrator |
2026-04-18 00:56:45.988638 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-18 00:56:45.988642 | orchestrator | Saturday 18 April 2026 00:49:51 +0000 (0:00:00.178) 0:03:05.044 ********
2026-04-18 00:56:45.988646 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988650 | orchestrator |
2026-04-18 00:56:45.988654 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-18 00:56:45.988658 | orchestrator | Saturday 18 April 2026 00:49:51 +0000 (0:00:00.173) 0:03:05.218 ********
2026-04-18 00:56:45.988661 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.988665 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.988669 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.988673 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:45.988676 | orchestrator |
2026-04-18 00:56:45.988680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-04-18 00:56:45.988684 | orchestrator | Saturday 18 April 2026 00:49:52 +0000 (0:00:00.740) 0:03:05.958 ********
2026-04-18 00:56:45.988688 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.988692 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.988696 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.988699 | orchestrator |
2026-04-18 00:56:45.988703 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-04-18 00:56:45.988707 | orchestrator | Saturday 18 April 2026 00:49:52 +0000 (0:00:00.263) 0:03:06.222 ********
2026-04-18 00:56:45.988711 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.988715 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.988718 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.988722 | orchestrator |
2026-04-18 00:56:45.988726 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-04-18 00:56:45.988730 | orchestrator | Saturday 18 April 2026 00:49:53 +0000 (0:00:01.505) 0:03:07.727 ********
2026-04-18 00:56:45.988737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:56:45.988741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:56:45.988745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:56:45.988748 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988752 | orchestrator |
2026-04-18 00:56:45.988756 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-18 00:56:45.988760 | orchestrator | Saturday 18 April 2026 00:49:54 +0000 (0:00:00.722) 0:03:08.450 ********
2026-04-18 00:56:45.988763 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.988767 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.988773 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.988779 | orchestrator |
2026-04-18 00:56:45.988785 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-18 00:56:45.988790 | orchestrator | Saturday 18 April 2026 00:49:54 +0000 (0:00:00.231) 0:03:08.681 ********
2026-04-18 00:56:45.988796 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.988802 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.988813 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.988820 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:45.988825 | orchestrator |
2026-04-18 00:56:45.988832 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-18 00:56:45.988839 | orchestrator | Saturday 18 April 2026 00:49:55 +0000 (0:00:00.674) 0:03:09.356 ********
2026-04-18 00:56:45.988850 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.988856 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.988862 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.988868 | orchestrator |
2026-04-18 00:56:45.988874 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-18 00:56:45.988880 | orchestrator | Saturday 18 April 2026 00:49:55 +0000 (0:00:00.248) 0:03:09.604 ********
2026-04-18 00:56:45.988886 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.988893 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.988899 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.988905 | orchestrator |
2026-04-18 00:56:45.988913 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-18 00:56:45.988917 | orchestrator | Saturday 18 April 2026 00:49:56 +0000 (0:00:00.993) 0:03:10.597 ********
2026-04-18 00:56:45.988921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:56:45.988925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:56:45.988929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:56:45.988932 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.988936 | orchestrator |
2026-04-18 00:56:45.988940 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-18 00:56:45.988944 |
orchestrator | Saturday 18 April 2026 00:49:57 +0000 (0:00:00.531) 0:03:11.129 ******** 2026-04-18 00:56:45.988947 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.988951 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.988956 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.988959 | orchestrator | 2026-04-18 00:56:45.988963 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-18 00:56:45.988967 | orchestrator | Saturday 18 April 2026 00:49:57 +0000 (0:00:00.277) 0:03:11.407 ******** 2026-04-18 00:56:45.988971 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.988974 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.988978 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.988982 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.988986 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.988989 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.988993 | orchestrator | 2026-04-18 00:56:45.988997 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-18 00:56:45.989001 | orchestrator | Saturday 18 April 2026 00:49:58 +0000 (0:00:00.680) 0:03:12.088 ******** 2026-04-18 00:56:45.989004 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.989008 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.989012 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.989015 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.989019 | orchestrator | 2026-04-18 00:56:45.989023 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-18 00:56:45.989027 | orchestrator | Saturday 18 April 2026 00:49:59 +0000 (0:00:00.860) 0:03:12.949 ******** 2026-04-18 00:56:45.989030 | orchestrator | ok: 
[testbed-node-0] 2026-04-18 00:56:45.989034 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.989038 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.989042 | orchestrator | 2026-04-18 00:56:45.989046 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-18 00:56:45.989050 | orchestrator | Saturday 18 April 2026 00:49:59 +0000 (0:00:00.295) 0:03:13.244 ******** 2026-04-18 00:56:45.989053 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:56:45.989057 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:56:45.989061 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:56:45.989065 | orchestrator | 2026-04-18 00:56:45.989069 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-18 00:56:45.989072 | orchestrator | Saturday 18 April 2026 00:50:00 +0000 (0:00:01.048) 0:03:14.292 ******** 2026-04-18 00:56:45.989076 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-18 00:56:45.989085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-18 00:56:45.989089 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-18 00:56:45.989093 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.989096 | orchestrator | 2026-04-18 00:56:45.989100 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-18 00:56:45.989105 | orchestrator | Saturday 18 April 2026 00:50:01 +0000 (0:00:00.518) 0:03:14.811 ******** 2026-04-18 00:56:45.989108 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.989112 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.989116 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.989120 | orchestrator | 2026-04-18 00:56:45.989128 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-18 00:56:45.989132 | orchestrator | 2026-04-18 
00:56:45.989136 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-18 00:56:45.989140 | orchestrator | Saturday 18 April 2026 00:50:01 +0000 (0:00:00.655) 0:03:15.467 ******** 2026-04-18 00:56:45.989144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.989148 | orchestrator | 2026-04-18 00:56:45.989151 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-18 00:56:45.989155 | orchestrator | Saturday 18 April 2026 00:50:02 +0000 (0:00:00.444) 0:03:15.911 ******** 2026-04-18 00:56:45.989159 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.989163 | orchestrator | 2026-04-18 00:56:45.989167 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-18 00:56:45.989175 | orchestrator | Saturday 18 April 2026 00:50:02 +0000 (0:00:00.454) 0:03:16.365 ******** 2026-04-18 00:56:45.989179 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.989183 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.989186 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.989190 | orchestrator | 2026-04-18 00:56:45.989194 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-18 00:56:45.989197 | orchestrator | Saturday 18 April 2026 00:50:03 +0000 (0:00:00.810) 0:03:17.176 ******** 2026-04-18 00:56:45.989201 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.989205 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.989208 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.989212 | orchestrator | 2026-04-18 00:56:45.989216 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-04-18 00:56:45.989220 | orchestrator | Saturday 18 April 2026 00:50:03 +0000 (0:00:00.247) 0:03:17.424 ********
2026-04-18 00:56:45.989223 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989227 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989231 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989234 | orchestrator |
2026-04-18 00:56:45.989238 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-18 00:56:45.989242 | orchestrator | Saturday 18 April 2026 00:50:03 +0000 (0:00:00.240) 0:03:17.664 ********
2026-04-18 00:56:45.989245 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989249 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989253 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989299 | orchestrator |
2026-04-18 00:56:45.989305 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-18 00:56:45.989309 | orchestrator | Saturday 18 April 2026 00:50:04 +0000 (0:00:00.222) 0:03:17.887 ********
2026-04-18 00:56:45.989313 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989317 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989320 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989324 | orchestrator |
2026-04-18 00:56:45.989328 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-18 00:56:45.989332 | orchestrator | Saturday 18 April 2026 00:50:04 +0000 (0:00:00.818) 0:03:18.706 ********
2026-04-18 00:56:45.989340 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989344 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989348 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989351 | orchestrator |
2026-04-18 00:56:45.989355 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-18 00:56:45.989359 | orchestrator | Saturday 18 April 2026 00:50:05 +0000 (0:00:00.260) 0:03:18.966 ********
2026-04-18 00:56:45.989363 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989367 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989371 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989374 | orchestrator |
2026-04-18 00:56:45.989378 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-18 00:56:45.989382 | orchestrator | Saturday 18 April 2026 00:50:05 +0000 (0:00:00.252) 0:03:19.218 ********
2026-04-18 00:56:45.989386 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989390 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989393 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989397 | orchestrator |
2026-04-18 00:56:45.989401 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-18 00:56:45.989405 | orchestrator | Saturday 18 April 2026 00:50:06 +0000 (0:00:00.920) 0:03:19.859 ********
2026-04-18 00:56:45.989408 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989412 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989416 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989420 | orchestrator |
2026-04-18 00:56:45.989424 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-18 00:56:45.989428 | orchestrator | Saturday 18 April 2026 00:50:07 +0000 (0:00:00.304) 0:03:20.779 ********
2026-04-18 00:56:45.989431 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989435 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989439 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989443 | orchestrator |
2026-04-18 00:56:45.989446 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-18 00:56:45.989450 | orchestrator | Saturday 18 April 2026 00:50:07 +0000 (0:00:00.303) 0:03:21.083 ********
2026-04-18 00:56:45.989454 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989458 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989461 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989465 | orchestrator |
2026-04-18 00:56:45.989469 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-18 00:56:45.989473 | orchestrator | Saturday 18 April 2026 00:50:07 +0000 (0:00:00.303) 0:03:21.387 ********
2026-04-18 00:56:45.989477 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989481 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989484 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989488 | orchestrator |
2026-04-18 00:56:45.989492 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-18 00:56:45.989496 | orchestrator | Saturday 18 April 2026 00:50:07 +0000 (0:00:00.281) 0:03:21.668 ********
2026-04-18 00:56:45.989500 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989512 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989523 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989529 | orchestrator |
2026-04-18 00:56:45.989535 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-18 00:56:45.989541 | orchestrator | Saturday 18 April 2026 00:50:08 +0000 (0:00:00.289) 0:03:21.958 ********
2026-04-18 00:56:45.989548 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989553 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989558 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989564 | orchestrator |
2026-04-18 00:56:45.989569 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-18 00:56:45.989575 | orchestrator | Saturday 18 April 2026 00:50:08 +0000 (0:00:00.525) 0:03:22.483 ********
2026-04-18 00:56:45.989587 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989593 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989599 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989605 | orchestrator |
2026-04-18 00:56:45.989611 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-18 00:56:45.989623 | orchestrator | Saturday 18 April 2026 00:50:09 +0000 (0:00:00.293) 0:03:22.777 ********
2026-04-18 00:56:45.989629 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989635 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.989641 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.989647 | orchestrator |
2026-04-18 00:56:45.989653 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-18 00:56:45.989660 | orchestrator | Saturday 18 April 2026 00:50:09 +0000 (0:00:00.296) 0:03:23.074 ********
2026-04-18 00:56:45.989666 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989673 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989679 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989685 | orchestrator |
2026-04-18 00:56:45.989691 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-18 00:56:45.989697 | orchestrator | Saturday 18 April 2026 00:50:09 +0000 (0:00:00.305) 0:03:23.380 ********
2026-04-18 00:56:45.989704 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989710 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989715 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989719 | orchestrator |
2026-04-18 00:56:45.989723 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-18 00:56:45.989727 | orchestrator | Saturday 18 April 2026 00:50:10 +0000 (0:00:00.518) 0:03:23.899 ********
2026-04-18 00:56:45.989731 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989734 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989738 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989742 | orchestrator |
2026-04-18 00:56:45.989746 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-18 00:56:45.989749 | orchestrator | Saturday 18 April 2026 00:50:10 +0000 (0:00:00.499) 0:03:24.398 ********
2026-04-18 00:56:45.989753 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989757 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989761 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989765 | orchestrator |
2026-04-18 00:56:45.989769 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-18 00:56:45.989772 | orchestrator | Saturday 18 April 2026 00:50:10 +0000 (0:00:00.326) 0:03:24.725 ********
2026-04-18 00:56:45.989776 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.989780 | orchestrator |
2026-04-18 00:56:45.989784 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-18 00:56:45.989788 | orchestrator | Saturday 18 April 2026 00:50:11 +0000 (0:00:00.721) 0:03:25.447 ********
2026-04-18 00:56:45.989791 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.989795 | orchestrator |
2026-04-18 00:56:45.989799 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-18 00:56:45.989802 | orchestrator | Saturday 18 April 2026 00:50:11 +0000 (0:00:00.153) 0:03:25.601 ********
2026-04-18 00:56:45.989806 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-18 00:56:45.989810 | orchestrator |
2026-04-18 00:56:45.989814 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-18 00:56:45.989817 | orchestrator | Saturday 18 April 2026 00:50:12 +0000 (0:00:01.060) 0:03:26.661 ********
2026-04-18 00:56:45.989821 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989825 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989829 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989832 | orchestrator |
2026-04-18 00:56:45.989836 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-18 00:56:45.989840 | orchestrator | Saturday 18 April 2026 00:50:13 +0000 (0:00:00.317) 0:03:26.979 ********
2026-04-18 00:56:45.989848 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989854 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989862 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989870 | orchestrator |
2026-04-18 00:56:45.989876 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-18 00:56:45.989882 | orchestrator | Saturday 18 April 2026 00:50:13 +0000 (0:00:00.331) 0:03:27.310 ********
2026-04-18 00:56:45.989888 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.989893 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.989899 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.989905 | orchestrator |
2026-04-18 00:56:45.989910 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-18 00:56:45.989916 | orchestrator | Saturday 18 April 2026 00:50:15 +0000 (0:00:01.518) 0:03:28.829 ********
2026-04-18 00:56:45.989922 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.989928 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.989934 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.989941 | orchestrator |
2026-04-18 00:56:45.989947 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-18 00:56:45.989954 | orchestrator | Saturday 18 April 2026 00:50:15 +0000 (0:00:00.836) 0:03:29.666 ********
2026-04-18 00:56:45.989960 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.989964 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.989967 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.989971 | orchestrator |
2026-04-18 00:56:45.989975 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-18 00:56:45.989984 | orchestrator | Saturday 18 April 2026 00:50:16 +0000 (0:00:00.752) 0:03:30.418 ********
2026-04-18 00:56:45.989987 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.989991 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.989995 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.989999 | orchestrator |
2026-04-18 00:56:45.990002 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-18 00:56:45.990006 | orchestrator | Saturday 18 April 2026 00:50:17 +0000 (0:00:00.792) 0:03:31.210 ********
2026-04-18 00:56:45.990010 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.990047 | orchestrator |
2026-04-18 00:56:45.990051 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-18 00:56:45.990055 | orchestrator | Saturday 18 April 2026 00:50:18 +0000 (0:00:01.065) 0:03:32.276 ********
2026-04-18 00:56:45.990059 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.990096 | orchestrator |
2026-04-18 00:56:45.990100 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-18 00:56:45.990104 | orchestrator | Saturday 18 April 2026 00:50:19 +0000 (0:00:00.726) 0:03:33.002 ********
2026-04-18 00:56:45.990108 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-18 00:56:45.990118 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:56:45.990122 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:56:45.990126 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-18 00:56:45.990130 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-18 00:56:45.990134 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-18 00:56:45.990137 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-18 00:56:45.990141 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-18 00:56:45.990145 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-18 00:56:45.990149 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-18 00:56:45.990153 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-18 00:56:45.990156 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-18 00:56:45.990160 | orchestrator |
2026-04-18 00:56:45.990164 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-18 00:56:45.990173 | orchestrator | Saturday 18 April 2026 00:50:22 +0000 (0:00:03.121) 0:03:36.123 ********
2026-04-18 00:56:45.990177 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.990181 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.990184 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.990188 | orchestrator |
2026-04-18 00:56:45.990192 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-18 00:56:45.990196 | orchestrator | Saturday 18 April 2026 00:50:23 +0000 (0:00:01.047) 0:03:37.171 ********
2026-04-18 00:56:45.990200 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.990204 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.990207 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.990211 | orchestrator |
2026-04-18 00:56:45.990215 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-18 00:56:45.990218 | orchestrator | Saturday 18 April 2026 00:50:23 +0000 (0:00:00.280) 0:03:37.451 ********
2026-04-18 00:56:45.990222 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.990226 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.990230 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.990233 | orchestrator |
2026-04-18 00:56:45.990237 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-18 00:56:45.990241 | orchestrator | Saturday 18 April 2026 00:50:23 +0000 (0:00:00.250) 0:03:37.702 ********
2026-04-18 00:56:45.990245 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.990248 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.990252 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.990401 | orchestrator |
2026-04-18 00:56:45.990424 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-18 00:56:45.990429 | orchestrator | Saturday 18 April 2026 00:50:25 +0000 (0:00:01.663) 0:03:39.365 ********
2026-04-18 00:56:45.990433 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.990436 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.990440 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.990444 | orchestrator |
2026-04-18 00:56:45.990448 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-18 00:56:45.990452 | orchestrator | Saturday 18 April 2026 00:50:26 +0000 (0:00:01.104) 0:03:40.470 ********
2026-04-18 00:56:45.990455 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.990459 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.990463 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.990467 | orchestrator |
2026-04-18 00:56:45.990470 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-18 00:56:45.990474 | orchestrator | Saturday 18 April 2026 00:50:27 +0000 (0:00:00.322) 0:03:40.792 ********
2026-04-18 00:56:45.990478 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.990482 | orchestrator |
2026-04-18 00:56:45.990485 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-18 00:56:45.990489 | orchestrator | Saturday 18 April 2026 00:50:27 +0000 (0:00:00.693) 0:03:41.485 ********
2026-04-18 00:56:45.990493 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.990497 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.990501 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.990504 | orchestrator |
2026-04-18 00:56:45.990508 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-18 00:56:45.990512 | orchestrator | Saturday 18 April 2026 00:50:28 +0000 (0:00:00.325) 0:03:41.811 ********
2026-04-18 00:56:45.990516 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.990519 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.990523 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.990527 | orchestrator |
2026-04-18 00:56:45.990530 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-18 00:56:45.990534 | orchestrator | Saturday 18 April 2026 00:50:28 +0000 (0:00:00.295) 0:03:42.107 ********
2026-04-18 00:56:45.990599 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.990603 | orchestrator |
2026-04-18 00:56:45.990607 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-18 00:56:45.990610 | orchestrator | Saturday 18 April 2026 00:50:28 +0000 (0:00:00.442) 0:03:42.549 ********
2026-04-18 00:56:45.990614 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.990618 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.990622 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.990625 | orchestrator |
2026-04-18 00:56:45.990629 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-18 00:56:45.990633 | orchestrator | Saturday 18 April 2026 00:50:30 +0000 (0:00:01.873) 0:03:44.423 ********
2026-04-18 00:56:45.990637 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.990640 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.990644 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.990648 | orchestrator |
2026-04-18 00:56:45.990652 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-18 00:56:45.990669 | orchestrator | Saturday 18 April 2026 00:50:31 +0000 (0:00:01.149) 0:03:45.572 ********
2026-04-18 00:56:45.990673 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.990677 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.990680 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.990692 | orchestrator |
2026-04-18 00:56:45.990696 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-18 00:56:45.990705 | orchestrator | Saturday 18 April 2026 00:50:33 +0000 (0:00:02.061) 0:03:47.634 ********
2026-04-18 00:56:45.990709 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.990713 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.990717 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.990720 | orchestrator |
2026-04-18 00:56:45.990724 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-18 00:56:45.990728 | orchestrator | Saturday 18 April 2026 00:50:35 +0000 (0:00:02.037) 0:03:49.672 ********
2026-04-18 00:56:45.990732 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.990736 | orchestrator |
2026-04-18 00:56:45.990739 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-18 00:56:45.990743 | orchestrator | Saturday 18 April 2026 00:50:36 +0000 (0:00:00.668) 0:03:50.340 ********
2026-04-18 00:56:45.990747 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.990751 | orchestrator |
2026-04-18 00:56:45.990754 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-18 00:56:45.990758 | orchestrator | Saturday 18 April 2026 00:50:37 +0000 (0:00:01.319) 0:03:51.659 ********
2026-04-18 00:56:45.990762 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.990766 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.990769 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.990773 | orchestrator |
2026-04-18 00:56:45.990777 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-18 00:56:45.990781 | orchestrator | Saturday 18 April 2026 00:50:46 +0000 (0:00:08.863) 0:04:00.523 ********
2026-04-18 00:56:45.990785 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.990788 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.990792 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.990796 | orchestrator |
2026-04-18 00:56:45.990800 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-18 00:56:45.990804 | orchestrator | Saturday 18 April 2026 00:50:47 +0000 (0:00:00.296) 0:04:00.820 ********
2026-04-18 00:56:45.990809 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__214217f144f4caef34280d1e40c507ef89dcc686'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-18 00:56:45.990819 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__214217f144f4caef34280d1e40c507ef89dcc686'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-18 00:56:45.990824 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__214217f144f4caef34280d1e40c507ef89dcc686'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-18 00:56:45.990830 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__214217f144f4caef34280d1e40c507ef89dcc686'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-18 00:56:45.990837 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__214217f144f4caef34280d1e40c507ef89dcc686'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-18 00:56:45.990843 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__214217f144f4caef34280d1e40c507ef89dcc686'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__214217f144f4caef34280d1e40c507ef89dcc686'}])  2026-04-18 00:56:45.990848 | orchestrator | 2026-04-18 00:56:45.990852 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-18 00:56:45.990860 | orchestrator | Saturday 18 April 2026 00:50:59 +0000 (0:00:12.916) 0:04:13.736 ******** 2026-04-18 00:56:45.990864 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.990868 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.990871 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.990875 | orchestrator | 2026-04-18 00:56:45.990879 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-18 00:56:45.990883 | orchestrator | Saturday 18 April 2026 00:51:00 +0000 (0:00:00.272) 0:04:14.008 ******** 2026-04-18 00:56:45.990886 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.990890 | orchestrator | 2026-04-18 00:56:45.990894 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-18 00:56:45.990898 | orchestrator | Saturday 18 April 2026 00:51:00 +0000 (0:00:00.498) 0:04:14.507 ******** 2026-04-18 00:56:45.990902 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.990905 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.990909 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.990913 | orchestrator | 2026-04-18 00:56:45.990917 | orchestrator | RUNNING 
HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-18 00:56:45.990920 | orchestrator | Saturday 18 April 2026 00:51:01 +0000 (0:00:00.273) 0:04:14.780 ******** 2026-04-18 00:56:45.990924 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.990928 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.990932 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.990935 | orchestrator | 2026-04-18 00:56:45.990939 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-18 00:56:45.990947 | orchestrator | Saturday 18 April 2026 00:51:01 +0000 (0:00:00.274) 0:04:15.055 ******** 2026-04-18 00:56:45.990950 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-18 00:56:45.990954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-18 00:56:45.990958 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-18 00:56:45.990962 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.990966 | orchestrator | 2026-04-18 00:56:45.990969 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-18 00:56:45.990973 | orchestrator | Saturday 18 April 2026 00:51:01 +0000 (0:00:00.673) 0:04:15.728 ******** 2026-04-18 00:56:45.990977 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.990981 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.990985 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.990989 | orchestrator | 2026-04-18 00:56:45.990992 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-04-18 00:56:45.990996 | orchestrator | 2026-04-18 00:56:45.991000 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-18 00:56:45.991004 | orchestrator | Saturday 18 April 2026 00:51:02 +0000 (0:00:00.650) 0:04:16.379 ******** 2026-04-18 
00:56:45.991008 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.991012 | orchestrator | 2026-04-18 00:56:45.991016 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-18 00:56:45.991020 | orchestrator | Saturday 18 April 2026 00:51:03 +0000 (0:00:00.409) 0:04:16.789 ******** 2026-04-18 00:56:45.991024 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.991027 | orchestrator | 2026-04-18 00:56:45.991031 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-18 00:56:45.991035 | orchestrator | Saturday 18 April 2026 00:51:03 +0000 (0:00:00.557) 0:04:17.347 ******** 2026-04-18 00:56:45.991039 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991042 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991046 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.991050 | orchestrator | 2026-04-18 00:56:45.991054 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-18 00:56:45.991058 | orchestrator | Saturday 18 April 2026 00:51:04 +0000 (0:00:00.747) 0:04:18.094 ******** 2026-04-18 00:56:45.991062 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991065 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991069 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991073 | orchestrator | 2026-04-18 00:56:45.991077 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-18 00:56:45.991081 | orchestrator | Saturday 18 April 2026 00:51:04 +0000 (0:00:00.270) 0:04:18.365 ******** 2026-04-18 00:56:45.991084 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991088 | orchestrator | skipping: 
[testbed-node-1] 2026-04-18 00:56:45.991092 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991095 | orchestrator | 2026-04-18 00:56:45.991099 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-18 00:56:45.991103 | orchestrator | Saturday 18 April 2026 00:51:04 +0000 (0:00:00.250) 0:04:18.616 ******** 2026-04-18 00:56:45.991107 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991110 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991118 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991121 | orchestrator | 2026-04-18 00:56:45.991125 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-18 00:56:45.991129 | orchestrator | Saturday 18 April 2026 00:51:05 +0000 (0:00:00.235) 0:04:18.852 ******** 2026-04-18 00:56:45.991133 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991137 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991141 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.991144 | orchestrator | 2026-04-18 00:56:45.991152 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-18 00:56:45.991156 | orchestrator | Saturday 18 April 2026 00:51:06 +0000 (0:00:00.934) 0:04:19.786 ******** 2026-04-18 00:56:45.991159 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991163 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991167 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991171 | orchestrator | 2026-04-18 00:56:45.991174 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-18 00:56:45.991178 | orchestrator | Saturday 18 April 2026 00:51:06 +0000 (0:00:00.282) 0:04:20.068 ******** 2026-04-18 00:56:45.991185 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991189 | orchestrator | skipping: [testbed-node-1] 
2026-04-18 00:56:45.991193 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991196 | orchestrator | 2026-04-18 00:56:45.991200 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-18 00:56:45.991204 | orchestrator | Saturday 18 April 2026 00:51:06 +0000 (0:00:00.244) 0:04:20.313 ******** 2026-04-18 00:56:45.991208 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991211 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991215 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.991219 | orchestrator | 2026-04-18 00:56:45.991223 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-18 00:56:45.991227 | orchestrator | Saturday 18 April 2026 00:51:07 +0000 (0:00:00.702) 0:04:21.016 ******** 2026-04-18 00:56:45.991231 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991234 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991238 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.991242 | orchestrator | 2026-04-18 00:56:45.991245 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-18 00:56:45.991249 | orchestrator | Saturday 18 April 2026 00:51:08 +0000 (0:00:00.762) 0:04:21.778 ******** 2026-04-18 00:56:45.991253 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991273 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991279 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991285 | orchestrator | 2026-04-18 00:56:45.991291 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-18 00:56:45.991297 | orchestrator | Saturday 18 April 2026 00:51:08 +0000 (0:00:00.275) 0:04:22.054 ******** 2026-04-18 00:56:45.991303 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991309 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991315 | orchestrator | ok: 
[testbed-node-2] 2026-04-18 00:56:45.991320 | orchestrator | 2026-04-18 00:56:45.991324 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-18 00:56:45.991328 | orchestrator | Saturday 18 April 2026 00:51:08 +0000 (0:00:00.300) 0:04:22.354 ******** 2026-04-18 00:56:45.991332 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991336 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991339 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991343 | orchestrator | 2026-04-18 00:56:45.991347 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-18 00:56:45.991351 | orchestrator | Saturday 18 April 2026 00:51:08 +0000 (0:00:00.252) 0:04:22.606 ******** 2026-04-18 00:56:45.991355 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991358 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991362 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991366 | orchestrator | 2026-04-18 00:56:45.991370 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-18 00:56:45.991374 | orchestrator | Saturday 18 April 2026 00:51:09 +0000 (0:00:00.432) 0:04:23.039 ******** 2026-04-18 00:56:45.991378 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991381 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991385 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991389 | orchestrator | 2026-04-18 00:56:45.991393 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-18 00:56:45.991402 | orchestrator | Saturday 18 April 2026 00:51:09 +0000 (0:00:00.245) 0:04:23.284 ******** 2026-04-18 00:56:45.991405 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991409 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991413 | orchestrator | skipping: 
[testbed-node-2] 2026-04-18 00:56:45.991417 | orchestrator | 2026-04-18 00:56:45.991420 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-18 00:56:45.991424 | orchestrator | Saturday 18 April 2026 00:51:09 +0000 (0:00:00.268) 0:04:23.553 ******** 2026-04-18 00:56:45.991428 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991432 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991436 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991439 | orchestrator | 2026-04-18 00:56:45.991443 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-18 00:56:45.991447 | orchestrator | Saturday 18 April 2026 00:51:10 +0000 (0:00:00.242) 0:04:23.795 ******** 2026-04-18 00:56:45.991450 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991454 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991458 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.991462 | orchestrator | 2026-04-18 00:56:45.991466 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-18 00:56:45.991469 | orchestrator | Saturday 18 April 2026 00:51:10 +0000 (0:00:00.255) 0:04:24.050 ******** 2026-04-18 00:56:45.991473 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991477 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991481 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.991485 | orchestrator | 2026-04-18 00:56:45.991489 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-18 00:56:45.991492 | orchestrator | Saturday 18 April 2026 00:51:10 +0000 (0:00:00.457) 0:04:24.508 ******** 2026-04-18 00:56:45.991496 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991500 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991504 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.991508 | 
orchestrator | 2026-04-18 00:56:45.991515 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-18 00:56:45.991519 | orchestrator | Saturday 18 April 2026 00:51:11 +0000 (0:00:00.464) 0:04:24.972 ******** 2026-04-18 00:56:45.991523 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-18 00:56:45.991527 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-18 00:56:45.991530 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-18 00:56:45.991534 | orchestrator | 2026-04-18 00:56:45.991538 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-18 00:56:45.991542 | orchestrator | Saturday 18 April 2026 00:51:11 +0000 (0:00:00.713) 0:04:25.686 ******** 2026-04-18 00:56:45.991546 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.991550 | orchestrator | 2026-04-18 00:56:45.991553 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-18 00:56:45.991561 | orchestrator | Saturday 18 April 2026 00:51:12 +0000 (0:00:00.577) 0:04:26.263 ******** 2026-04-18 00:56:45.991565 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:56:45.991569 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:56:45.991573 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:56:45.991576 | orchestrator | 2026-04-18 00:56:45.991580 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-18 00:56:45.991584 | orchestrator | Saturday 18 April 2026 00:51:13 +0000 (0:00:00.548) 0:04:26.811 ******** 2026-04-18 00:56:45.991588 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991591 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991595 | orchestrator | 
skipping: [testbed-node-2] 2026-04-18 00:56:45.991599 | orchestrator | 2026-04-18 00:56:45.991602 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-18 00:56:45.991606 | orchestrator | Saturday 18 April 2026 00:51:13 +0000 (0:00:00.255) 0:04:27.067 ******** 2026-04-18 00:56:45.991615 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-18 00:56:45.991619 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-18 00:56:45.991623 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-18 00:56:45.991626 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-18 00:56:45.991631 | orchestrator | 2026-04-18 00:56:45.991634 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-18 00:56:45.991638 | orchestrator | Saturday 18 April 2026 00:51:22 +0000 (0:00:09.515) 0:04:36.582 ******** 2026-04-18 00:56:45.991642 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991646 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991650 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.991654 | orchestrator | 2026-04-18 00:56:45.991658 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-18 00:56:45.991662 | orchestrator | Saturday 18 April 2026 00:51:23 +0000 (0:00:00.586) 0:04:37.169 ******** 2026-04-18 00:56:45.991665 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-18 00:56:45.991669 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-18 00:56:45.991673 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-18 00:56:45.991677 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-18 00:56:45.991681 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-18 00:56:45.991684 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2026-04-18 00:56:45.991688 | orchestrator | 2026-04-18 00:56:45.991692 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-18 00:56:45.991695 | orchestrator | Saturday 18 April 2026 00:51:25 +0000 (0:00:02.222) 0:04:39.391 ******** 2026-04-18 00:56:45.991699 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-18 00:56:45.991703 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-18 00:56:45.991707 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-18 00:56:45.991711 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-18 00:56:45.991714 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-18 00:56:45.991718 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-18 00:56:45.991722 | orchestrator | 2026-04-18 00:56:45.991726 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-18 00:56:45.991730 | orchestrator | Saturday 18 April 2026 00:51:26 +0000 (0:00:01.355) 0:04:40.747 ******** 2026-04-18 00:56:45.991734 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.991737 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.991741 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.991745 | orchestrator | 2026-04-18 00:56:45.991749 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-18 00:56:45.991753 | orchestrator | Saturday 18 April 2026 00:51:27 +0000 (0:00:00.699) 0:04:41.446 ******** 2026-04-18 00:56:45.991756 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991760 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991764 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991768 | orchestrator | 2026-04-18 00:56:45.991772 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-18 00:56:45.991776 | 
orchestrator | Saturday 18 April 2026 00:51:27 +0000 (0:00:00.292) 0:04:41.738 ******** 2026-04-18 00:56:45.991780 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991783 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991787 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991791 | orchestrator | 2026-04-18 00:56:45.991795 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-18 00:56:45.991798 | orchestrator | Saturday 18 April 2026 00:51:28 +0000 (0:00:00.504) 0:04:42.243 ******** 2026-04-18 00:56:45.991802 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.991810 | orchestrator | 2026-04-18 00:56:45.991814 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-18 00:56:45.991820 | orchestrator | Saturday 18 April 2026 00:51:28 +0000 (0:00:00.503) 0:04:42.746 ******** 2026-04-18 00:56:45.991824 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991828 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991832 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991835 | orchestrator | 2026-04-18 00:56:45.991839 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-18 00:56:45.991843 | orchestrator | Saturday 18 April 2026 00:51:29 +0000 (0:00:00.317) 0:04:43.064 ******** 2026-04-18 00:56:45.991847 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991851 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991855 | orchestrator | skipping: [testbed-node-2] 2026-04-18 00:56:45.991859 | orchestrator | 2026-04-18 00:56:45.991863 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-18 00:56:45.991867 | orchestrator | Saturday 18 April 2026 00:51:29 +0000 (0:00:00.550) 
0:04:43.614 ******** 2026-04-18 00:56:45.991871 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.991874 | orchestrator | 2026-04-18 00:56:45.991882 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-18 00:56:45.991886 | orchestrator | Saturday 18 April 2026 00:51:30 +0000 (0:00:00.494) 0:04:44.108 ******** 2026-04-18 00:56:45.991889 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:56:45.991893 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:56:45.991897 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:56:45.991901 | orchestrator | 2026-04-18 00:56:45.991905 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-18 00:56:45.991908 | orchestrator | Saturday 18 April 2026 00:51:31 +0000 (0:00:01.330) 0:04:45.439 ******** 2026-04-18 00:56:45.991912 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:56:45.991916 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:56:45.991920 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:56:45.991924 | orchestrator | 2026-04-18 00:56:45.991927 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-18 00:56:45.991931 | orchestrator | Saturday 18 April 2026 00:51:33 +0000 (0:00:01.412) 0:04:46.852 ******** 2026-04-18 00:56:45.991935 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:56:45.991939 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:56:45.991942 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:56:45.991946 | orchestrator | 2026-04-18 00:56:45.991950 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-18 00:56:45.991954 | orchestrator | Saturday 18 April 2026 00:51:34 +0000 (0:00:01.857) 0:04:48.710 ******** 2026-04-18 00:56:45.991957 | orchestrator | changed: 
[testbed-node-0] 2026-04-18 00:56:45.991961 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:56:45.991965 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:56:45.991969 | orchestrator | 2026-04-18 00:56:45.991972 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-18 00:56:45.991976 | orchestrator | Saturday 18 April 2026 00:51:36 +0000 (0:00:01.942) 0:04:50.652 ******** 2026-04-18 00:56:45.991980 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.991984 | orchestrator | skipping: [testbed-node-1] 2026-04-18 00:56:45.991988 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-18 00:56:45.991991 | orchestrator | 2026-04-18 00:56:45.991995 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-18 00:56:45.991999 | orchestrator | Saturday 18 April 2026 00:51:37 +0000 (0:00:00.425) 0:04:51.077 ******** 2026-04-18 00:56:45.992003 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-18 00:56:45.992007 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-04-18 00:56:45.992015 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-04-18 00:56:45.992019 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-04-18 00:56:45.992023 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-04-18 00:56:45.992027 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-04-18 00:56:45.992031 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-18 00:56:45.992034 | orchestrator | 2026-04-18 00:56:45.992038 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-18 00:56:45.992042 | orchestrator | Saturday 18 April 2026 00:52:13 +0000 (0:00:36.479) 0:05:27.557 ******** 2026-04-18 00:56:45.992045 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-18 00:56:45.992049 | orchestrator | 2026-04-18 00:56:45.992053 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-18 00:56:45.992057 | orchestrator | Saturday 18 April 2026 00:52:15 +0000 (0:00:01.669) 0:05:29.227 ******** 2026-04-18 00:56:45.992060 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.992064 | orchestrator | 2026-04-18 00:56:45.992068 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-18 00:56:45.992072 | orchestrator | Saturday 18 April 2026 00:52:15 +0000 (0:00:00.308) 0:05:29.536 ******** 2026-04-18 00:56:45.992076 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.992079 | orchestrator | 2026-04-18 00:56:45.992083 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-18 00:56:45.992087 | orchestrator | Saturday 18 April 2026 00:52:15 +0000 (0:00:00.146) 0:05:29.682 ******** 2026-04-18 00:56:45.992097 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-04-18 00:56:45.992101 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-04-18 00:56:45.992104 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-04-18 00:56:45.992108 | orchestrator | 2026-04-18 00:56:45.992115 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-04-18 00:56:45.992119 | orchestrator | Saturday 18 April 2026 00:52:22 +0000 (0:00:06.452) 0:05:36.134 ******** 2026-04-18 00:56:45.992122 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-18 00:56:45.992126 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-04-18 00:56:45.992130 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-04-18 00:56:45.992134 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-18 00:56:45.992137 | orchestrator | 2026-04-18 00:56:45.992141 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-18 00:56:45.992145 | orchestrator | Saturday 18 April 2026 00:52:27 +0000 (0:00:04.796) 0:05:40.931 ******** 2026-04-18 00:56:45.992149 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:56:45.992152 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:56:45.992156 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:56:45.992160 | orchestrator | 2026-04-18 00:56:45.992167 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-18 00:56:45.992171 | orchestrator | Saturday 18 April 2026 00:52:27 +0000 (0:00:00.776) 0:05:41.707 ******** 2026-04-18 00:56:45.992175 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 00:56:45.992179 | orchestrator | 2026-04-18 00:56:45.992183 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-18 00:56:45.992186 | orchestrator | Saturday 18 April 2026 00:52:28 +0000 (0:00:00.441) 0:05:42.148 ******** 2026-04-18 00:56:45.992190 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.992197 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.992201 | orchestrator | ok: 
[testbed-node-2] 2026-04-18 00:56:45.992205 | orchestrator | 2026-04-18 00:56:45.992209 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-18 00:56:45.992212 | orchestrator | Saturday 18 April 2026 00:52:28 +0000 (0:00:00.254) 0:05:42.403 ******** 2026-04-18 00:56:45.992216 | orchestrator | changed: [testbed-node-0] 2026-04-18 00:56:45.992220 | orchestrator | changed: [testbed-node-1] 2026-04-18 00:56:45.992224 | orchestrator | changed: [testbed-node-2] 2026-04-18 00:56:45.992227 | orchestrator | 2026-04-18 00:56:45.992231 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-18 00:56:45.992235 | orchestrator | Saturday 18 April 2026 00:52:29 +0000 (0:00:01.187) 0:05:43.591 ******** 2026-04-18 00:56:45.992239 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-18 00:56:45.992243 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-18 00:56:45.992247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-18 00:56:45.992250 | orchestrator | skipping: [testbed-node-0] 2026-04-18 00:56:45.992254 | orchestrator | 2026-04-18 00:56:45.992288 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-18 00:56:45.992292 | orchestrator | Saturday 18 April 2026 00:52:30 +0000 (0:00:00.860) 0:05:44.452 ******** 2026-04-18 00:56:45.992296 | orchestrator | ok: [testbed-node-0] 2026-04-18 00:56:45.992300 | orchestrator | ok: [testbed-node-1] 2026-04-18 00:56:45.992304 | orchestrator | ok: [testbed-node-2] 2026-04-18 00:56:45.992307 | orchestrator | 2026-04-18 00:56:45.992311 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-04-18 00:56:45.992315 | orchestrator | 2026-04-18 00:56:45.992319 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-18 
00:56:45.992323 | orchestrator | Saturday 18 April 2026 00:52:31 +0000 (0:00:00.494) 0:05:44.946 ******** 2026-04-18 00:56:45.992327 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.992330 | orchestrator | 2026-04-18 00:56:45.992334 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-18 00:56:45.992338 | orchestrator | Saturday 18 April 2026 00:52:31 +0000 (0:00:00.594) 0:05:45.540 ******** 2026-04-18 00:56:45.992342 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.992346 | orchestrator | 2026-04-18 00:56:45.992349 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-18 00:56:45.992353 | orchestrator | Saturday 18 April 2026 00:52:32 +0000 (0:00:00.446) 0:05:45.987 ******** 2026-04-18 00:56:45.992357 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.992361 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992364 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992368 | orchestrator | 2026-04-18 00:56:45.992372 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-18 00:56:45.992376 | orchestrator | Saturday 18 April 2026 00:52:32 +0000 (0:00:00.338) 0:05:46.326 ******** 2026-04-18 00:56:45.992379 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992383 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992387 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992391 | orchestrator | 2026-04-18 00:56:45.992394 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-18 00:56:45.992398 | orchestrator | Saturday 18 April 2026 00:52:33 +0000 (0:00:00.887) 0:05:47.213 ******** 
2026-04-18 00:56:45.992402 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992406 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992409 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992413 | orchestrator | 2026-04-18 00:56:45.992417 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-18 00:56:45.992421 | orchestrator | Saturday 18 April 2026 00:52:34 +0000 (0:00:00.750) 0:05:47.964 ******** 2026-04-18 00:56:45.992430 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992433 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992437 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992441 | orchestrator | 2026-04-18 00:56:45.992445 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-18 00:56:45.992448 | orchestrator | Saturday 18 April 2026 00:52:34 +0000 (0:00:00.713) 0:05:48.678 ******** 2026-04-18 00:56:45.992452 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.992459 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992463 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992467 | orchestrator | 2026-04-18 00:56:45.992470 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-18 00:56:45.992474 | orchestrator | Saturday 18 April 2026 00:52:35 +0000 (0:00:00.304) 0:05:48.983 ******** 2026-04-18 00:56:45.992478 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.992482 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992485 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992489 | orchestrator | 2026-04-18 00:56:45.992493 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-18 00:56:45.992497 | orchestrator | Saturday 18 April 2026 00:52:35 +0000 (0:00:00.517) 0:05:49.500 ******** 2026-04-18 00:56:45.992500 | 
orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.992504 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992508 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992512 | orchestrator | 2026-04-18 00:56:45.992515 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-18 00:56:45.992522 | orchestrator | Saturday 18 April 2026 00:52:36 +0000 (0:00:00.288) 0:05:49.788 ******** 2026-04-18 00:56:45.992526 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992530 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992534 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992537 | orchestrator | 2026-04-18 00:56:45.992541 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-18 00:56:45.992545 | orchestrator | Saturday 18 April 2026 00:52:36 +0000 (0:00:00.728) 0:05:50.517 ******** 2026-04-18 00:56:45.992549 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992552 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992556 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992560 | orchestrator | 2026-04-18 00:56:45.992564 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-18 00:56:45.992567 | orchestrator | Saturday 18 April 2026 00:52:37 +0000 (0:00:00.793) 0:05:51.310 ******** 2026-04-18 00:56:45.992571 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.992575 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992579 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992582 | orchestrator | 2026-04-18 00:56:45.992586 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-18 00:56:45.992590 | orchestrator | Saturday 18 April 2026 00:52:38 +0000 (0:00:00.488) 0:05:51.799 ******** 2026-04-18 00:56:45.992594 | orchestrator | skipping: 
[testbed-node-3] 2026-04-18 00:56:45.992597 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992601 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992605 | orchestrator | 2026-04-18 00:56:45.992608 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-18 00:56:45.992612 | orchestrator | Saturday 18 April 2026 00:52:38 +0000 (0:00:00.298) 0:05:52.098 ******** 2026-04-18 00:56:45.992616 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992620 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992623 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992627 | orchestrator | 2026-04-18 00:56:45.992631 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-18 00:56:45.992635 | orchestrator | Saturday 18 April 2026 00:52:38 +0000 (0:00:00.287) 0:05:52.385 ******** 2026-04-18 00:56:45.992639 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992646 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992649 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992653 | orchestrator | 2026-04-18 00:56:45.992657 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-18 00:56:45.992661 | orchestrator | Saturday 18 April 2026 00:52:38 +0000 (0:00:00.295) 0:05:52.681 ******** 2026-04-18 00:56:45.992664 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992668 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992672 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992676 | orchestrator | 2026-04-18 00:56:45.992680 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-18 00:56:45.992684 | orchestrator | Saturday 18 April 2026 00:52:39 +0000 (0:00:00.504) 0:05:53.185 ******** 2026-04-18 00:56:45.992687 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.992691 | 
orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992695 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992699 | orchestrator | 2026-04-18 00:56:45.992703 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-18 00:56:45.992706 | orchestrator | Saturday 18 April 2026 00:52:39 +0000 (0:00:00.301) 0:05:53.486 ******** 2026-04-18 00:56:45.992710 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.992714 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992718 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992721 | orchestrator | 2026-04-18 00:56:45.992725 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-18 00:56:45.992729 | orchestrator | Saturday 18 April 2026 00:52:40 +0000 (0:00:00.295) 0:05:53.782 ******** 2026-04-18 00:56:45.992733 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.992737 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992740 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992744 | orchestrator | 2026-04-18 00:56:45.992748 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-18 00:56:45.992751 | orchestrator | Saturday 18 April 2026 00:52:40 +0000 (0:00:00.361) 0:05:54.143 ******** 2026-04-18 00:56:45.992755 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992759 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992763 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992767 | orchestrator | 2026-04-18 00:56:45.992770 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-18 00:56:45.992774 | orchestrator | Saturday 18 April 2026 00:52:40 +0000 (0:00:00.571) 0:05:54.715 ******** 2026-04-18 00:56:45.992778 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992782 | orchestrator | ok: 
[testbed-node-5] 2026-04-18 00:56:45.992785 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992789 | orchestrator | 2026-04-18 00:56:45.992793 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-18 00:56:45.992797 | orchestrator | Saturday 18 April 2026 00:52:41 +0000 (0:00:00.532) 0:05:55.248 ******** 2026-04-18 00:56:45.992800 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992804 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992808 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992812 | orchestrator | 2026-04-18 00:56:45.992818 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-18 00:56:45.992822 | orchestrator | Saturday 18 April 2026 00:52:41 +0000 (0:00:00.297) 0:05:55.545 ******** 2026-04-18 00:56:45.992826 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-18 00:56:45.992830 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-18 00:56:45.992833 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-18 00:56:45.992837 | orchestrator | 2026-04-18 00:56:45.992841 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-18 00:56:45.992845 | orchestrator | Saturday 18 April 2026 00:52:42 +0000 (0:00:00.855) 0:05:56.400 ******** 2026-04-18 00:56:45.992849 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.992856 | orchestrator | 2026-04-18 00:56:45.992863 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-18 00:56:45.992867 | orchestrator | Saturday 18 April 2026 00:52:43 +0000 (0:00:00.745) 0:05:57.145 ******** 2026-04-18 00:56:45.992871 | orchestrator | skipping: 
[testbed-node-3] 2026-04-18 00:56:45.992875 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992879 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992882 | orchestrator | 2026-04-18 00:56:45.992886 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-18 00:56:45.992890 | orchestrator | Saturday 18 April 2026 00:52:43 +0000 (0:00:00.282) 0:05:57.427 ******** 2026-04-18 00:56:45.992893 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.992897 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.992901 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.992905 | orchestrator | 2026-04-18 00:56:45.992908 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-18 00:56:45.992912 | orchestrator | Saturday 18 April 2026 00:52:43 +0000 (0:00:00.292) 0:05:57.720 ******** 2026-04-18 00:56:45.992916 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992920 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992923 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992927 | orchestrator | 2026-04-18 00:56:45.992931 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-18 00:56:45.992935 | orchestrator | Saturday 18 April 2026 00:52:44 +0000 (0:00:00.929) 0:05:58.650 ******** 2026-04-18 00:56:45.992939 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.992942 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.992946 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.992950 | orchestrator | 2026-04-18 00:56:45.992954 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-18 00:56:45.992957 | orchestrator | Saturday 18 April 2026 00:52:45 +0000 (0:00:00.325) 0:05:58.975 ******** 2026-04-18 00:56:45.992961 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-18 00:56:45.992965 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-18 00:56:45.992969 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-18 00:56:45.992972 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-18 00:56:45.992976 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-18 00:56:45.992980 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-18 00:56:45.992984 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-18 00:56:45.992987 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-18 00:56:45.992991 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-18 00:56:45.992995 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-18 00:56:45.992999 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-18 00:56:45.993003 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-18 00:56:45.993006 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-18 00:56:45.993010 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-18 00:56:45.993014 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-18 00:56:45.993018 | orchestrator | 2026-04-18 00:56:45.993021 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-04-18 00:56:45.993029 | orchestrator | Saturday 18 April 2026 00:52:49 +0000 (0:00:04.307) 0:06:03.283 ******** 2026-04-18 00:56:45.993032 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.993037 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.993043 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.993048 | orchestrator | 2026-04-18 00:56:45.993054 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-18 00:56:45.993060 | orchestrator | Saturday 18 April 2026 00:52:49 +0000 (0:00:00.281) 0:06:03.564 ******** 2026-04-18 00:56:45.993066 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.993073 | orchestrator | 2026-04-18 00:56:45.993077 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-18 00:56:45.993081 | orchestrator | Saturday 18 April 2026 00:52:50 +0000 (0:00:00.712) 0:06:04.277 ******** 2026-04-18 00:56:45.993088 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-18 00:56:45.993092 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-18 00:56:45.993096 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-18 00:56:45.993099 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-18 00:56:45.993103 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-18 00:56:45.993107 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-18 00:56:45.993111 | orchestrator | 2026-04-18 00:56:45.993114 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-18 00:56:45.993118 | orchestrator | Saturday 18 April 2026 00:52:51 +0000 (0:00:01.051) 0:06:05.329 ******** 2026-04-18 00:56:45.993122 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-18 00:56:45.993126 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-18 00:56:45.993130 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-18 00:56:45.993134 | orchestrator | 2026-04-18 00:56:45.993141 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-18 00:56:45.993145 | orchestrator | Saturday 18 April 2026 00:52:53 +0000 (0:00:02.167) 0:06:07.497 ******** 2026-04-18 00:56:45.993149 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-18 00:56:45.993153 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-18 00:56:45.993156 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.993160 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-18 00:56:45.993164 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-18 00:56:45.993168 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.993172 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-18 00:56:45.993175 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-18 00:56:45.993179 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.993183 | orchestrator | 2026-04-18 00:56:45.993187 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-18 00:56:45.993191 | orchestrator | Saturday 18 April 2026 00:52:55 +0000 (0:00:01.278) 0:06:08.775 ******** 2026-04-18 00:56:45.993194 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-18 00:56:45.993198 | orchestrator | 2026-04-18 00:56:45.993202 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-18 00:56:45.993206 | orchestrator | Saturday 18 April 2026 00:52:57 +0000 (0:00:02.893) 0:06:11.668 ******** 2026-04-18 00:56:45.993210 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.993213 | orchestrator | 2026-04-18 00:56:45.993217 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-18 00:56:45.993221 | orchestrator | Saturday 18 April 2026 00:52:58 +0000 (0:00:00.745) 0:06:12.413 ******** 2026-04-18 00:56:45.993225 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9fd71a58-43ec-5e10-bd02-c7d805355b61', 'data_vg': 'ceph-9fd71a58-43ec-5e10-bd02-c7d805355b61'}) 2026-04-18 00:56:45.993234 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-93b19634-3a0b-57aa-985a-342cbb17f88c', 'data_vg': 'ceph-93b19634-3a0b-57aa-985a-342cbb17f88c'}) 2026-04-18 00:56:45.993238 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fe91ca0a-93bc-5e10-8732-62b62acecb68', 'data_vg': 'ceph-fe91ca0a-93bc-5e10-8732-62b62acecb68'}) 2026-04-18 00:56:45.993242 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-97728c5d-edf3-594c-abdf-329078c85e67', 'data_vg': 'ceph-97728c5d-edf3-594c-abdf-329078c85e67'}) 2026-04-18 00:56:45.993246 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a', 'data_vg': 'ceph-0a0ecf7f-ac15-597c-a1da-c22b9ec93d1a'}) 2026-04-18 00:56:45.993250 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a409408a-9332-5b4b-a953-28c1be45fb12', 'data_vg': 'ceph-a409408a-9332-5b4b-a953-28c1be45fb12'}) 2026-04-18 00:56:45.993253 | orchestrator | 2026-04-18 00:56:45.993272 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-18 00:56:45.993276 | orchestrator | Saturday 18 April 2026 00:53:35 +0000 (0:00:36.625) 0:06:49.039 ******** 2026-04-18 00:56:45.993280 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.993283 | orchestrator | skipping: [testbed-node-4] 2026-04-18 
00:56:45.993287 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.993291 | orchestrator | 2026-04-18 00:56:45.993295 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-18 00:56:45.993299 | orchestrator | Saturday 18 April 2026 00:53:35 +0000 (0:00:00.280) 0:06:49.320 ******** 2026-04-18 00:56:45.993302 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.993306 | orchestrator | 2026-04-18 00:56:45.993310 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-18 00:56:45.993314 | orchestrator | Saturday 18 April 2026 00:53:36 +0000 (0:00:00.631) 0:06:49.951 ******** 2026-04-18 00:56:45.993317 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.993321 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.993325 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.993329 | orchestrator | 2026-04-18 00:56:45.993333 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-18 00:56:45.993336 | orchestrator | Saturday 18 April 2026 00:53:36 +0000 (0:00:00.556) 0:06:50.508 ******** 2026-04-18 00:56:45.993340 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.993344 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.993348 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.993351 | orchestrator | 2026-04-18 00:56:45.993359 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-18 00:56:45.993363 | orchestrator | Saturday 18 April 2026 00:53:39 +0000 (0:00:02.336) 0:06:52.844 ******** 2026-04-18 00:56:45.993367 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.993370 | orchestrator | 2026-04-18 00:56:45.993374 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-04-18 00:56:45.993378 | orchestrator | Saturday 18 April 2026 00:53:39 +0000 (0:00:00.801) 0:06:53.646 ******** 2026-04-18 00:56:45.993382 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.993385 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.993389 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.993393 | orchestrator | 2026-04-18 00:56:45.993397 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-18 00:56:45.993400 | orchestrator | Saturday 18 April 2026 00:53:40 +0000 (0:00:01.103) 0:06:54.749 ******** 2026-04-18 00:56:45.993404 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.993411 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.993415 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.993422 | orchestrator | 2026-04-18 00:56:45.993426 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-18 00:56:45.993430 | orchestrator | Saturday 18 April 2026 00:53:42 +0000 (0:00:01.100) 0:06:55.850 ******** 2026-04-18 00:56:45.993433 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.993437 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.993441 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.993444 | orchestrator | 2026-04-18 00:56:45.993448 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-18 00:56:45.993452 | orchestrator | Saturday 18 April 2026 00:53:44 +0000 (0:00:01.929) 0:06:57.779 ******** 2026-04-18 00:56:45.993456 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.993459 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.993463 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.993467 | orchestrator | 2026-04-18 00:56:45.993470 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-04-18 00:56:45.993474 | orchestrator | Saturday 18 April 2026 00:53:44 +0000 (0:00:00.330) 0:06:58.109 ******** 2026-04-18 00:56:45.993478 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.993482 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.993485 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.993489 | orchestrator | 2026-04-18 00:56:45.993493 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-18 00:56:45.993496 | orchestrator | Saturday 18 April 2026 00:53:44 +0000 (0:00:00.253) 0:06:58.363 ******** 2026-04-18 00:56:45.993500 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-04-18 00:56:45.993504 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-04-18 00:56:45.993508 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-04-18 00:56:45.993511 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-18 00:56:45.993515 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-04-18 00:56:45.993519 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-18 00:56:45.993522 | orchestrator | 2026-04-18 00:56:45.993526 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-18 00:56:45.993530 | orchestrator | Saturday 18 April 2026 00:53:45 +0000 (0:00:00.934) 0:06:59.297 ******** 2026-04-18 00:56:45.993534 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-18 00:56:45.993537 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-18 00:56:45.993541 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-18 00:56:45.993545 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-18 00:56:45.993548 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-18 00:56:45.993552 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-18 00:56:45.993556 | orchestrator | 2026-04-18 00:56:45.993567 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-04-18 00:56:45.993571 | orchestrator | Saturday 18 April 2026 00:53:47 +0000 (0:00:02.299) 0:07:01.597 ******** 2026-04-18 00:56:45.993575 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-04-18 00:56:45.993579 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-04-18 00:56:45.993583 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-18 00:56:45.993592 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-18 00:56:45.993596 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-18 00:56:45.993600 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-18 00:56:45.993604 | orchestrator | 2026-04-18 00:56:45.993608 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-18 00:56:45.993612 | orchestrator | Saturday 18 April 2026 00:53:51 +0000 (0:00:03.817) 0:07:05.415 ******** 2026-04-18 00:56:45.993615 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.993619 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.993623 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-18 00:56:45.993626 | orchestrator | 2026-04-18 00:56:45.993630 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-18 00:56:45.993638 | orchestrator | Saturday 18 April 2026 00:53:55 +0000 (0:00:03.445) 0:07:08.860 ******** 2026-04-18 00:56:45.993642 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.993646 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.993649 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-04-18 00:56:45.993653 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-18 00:56:45.993657 | orchestrator | 2026-04-18 00:56:45.993661 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-18 00:56:45.993664 | orchestrator | Saturday 18 April 2026 00:54:07 +0000 (0:00:12.575) 0:07:21.436 ******** 2026-04-18 00:56:45.993668 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.993672 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.993676 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.993680 | orchestrator | 2026-04-18 00:56:45.993683 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-18 00:56:45.993690 | orchestrator | Saturday 18 April 2026 00:54:08 +0000 (0:00:00.895) 0:07:22.332 ******** 2026-04-18 00:56:45.993694 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.993698 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.993701 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.993705 | orchestrator | 2026-04-18 00:56:45.993709 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-18 00:56:45.993713 | orchestrator | Saturday 18 April 2026 00:54:08 +0000 (0:00:00.265) 0:07:22.598 ******** 2026-04-18 00:56:45.993717 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.993720 | orchestrator | 2026-04-18 00:56:45.993724 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-18 00:56:45.993728 | orchestrator | Saturday 18 April 2026 00:54:09 +0000 (0:00:00.598) 0:07:23.196 ******** 2026-04-18 00:56:45.993732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-18 00:56:45.993735 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  
2026-04-18 00:56:45.993742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2026-04-18 00:56:45.993746 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993750 | orchestrator |
2026-04-18 00:56:45.993754 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-18 00:56:45.993757 | orchestrator | Saturday 18 April 2026 00:54:09 +0000 (0:00:00.353) 0:07:23.550 ********
2026-04-18 00:56:45.993761 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993765 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.993768 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.993772 | orchestrator |
2026-04-18 00:56:45.993776 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-18 00:56:45.993780 | orchestrator | Saturday 18 April 2026 00:54:10 +0000 (0:00:00.260) 0:07:23.811 ********
2026-04-18 00:56:45.993783 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993787 | orchestrator |
2026-04-18 00:56:45.993791 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-18 00:56:45.993794 | orchestrator | Saturday 18 April 2026 00:54:10 +0000 (0:00:00.192) 0:07:24.004 ********
2026-04-18 00:56:45.993798 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993802 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.993806 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.993809 | orchestrator |
2026-04-18 00:56:45.993814 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-18 00:56:45.993821 | orchestrator | Saturday 18 April 2026 00:54:10 +0000 (0:00:00.423) 0:07:24.427 ********
2026-04-18 00:56:45.993827 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993835 | orchestrator |
2026-04-18 00:56:45.993841 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-18 00:56:45.993847 | orchestrator | Saturday 18 April 2026 00:54:10 +0000 (0:00:00.207) 0:07:24.634 ********
2026-04-18 00:56:45.993857 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993864 | orchestrator |
2026-04-18 00:56:45.993870 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-18 00:56:45.993876 | orchestrator | Saturday 18 April 2026 00:54:11 +0000 (0:00:00.198) 0:07:24.832 ********
2026-04-18 00:56:45.993882 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993888 | orchestrator |
2026-04-18 00:56:45.993895 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-18 00:56:45.993901 | orchestrator | Saturday 18 April 2026 00:54:11 +0000 (0:00:00.106) 0:07:24.939 ********
2026-04-18 00:56:45.993908 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993916 | orchestrator |
2026-04-18 00:56:45.993922 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-18 00:56:45.993929 | orchestrator | Saturday 18 April 2026 00:54:11 +0000 (0:00:00.194) 0:07:25.133 ********
2026-04-18 00:56:45.993935 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993942 | orchestrator |
2026-04-18 00:56:45.993947 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-18 00:56:45.993954 | orchestrator | Saturday 18 April 2026 00:54:11 +0000 (0:00:00.196) 0:07:25.330 ********
2026-04-18 00:56:45.993961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
2026-04-18 00:56:45.993967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2026-04-18 00:56:45.993974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  
2026-04-18 00:56:45.993981 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.993985 | orchestrator |
2026-04-18 00:56:45.993989 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-18 00:56:45.993993 | orchestrator | Saturday 18 April 2026 00:54:11 +0000 (0:00:00.351) 0:07:25.681 ********
2026-04-18 00:56:45.993996 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994000 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994004 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994008 | orchestrator |
2026-04-18 00:56:45.994113 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-18 00:56:45.994120 | orchestrator | Saturday 18 April 2026 00:54:12 +0000 (0:00:00.268) 0:07:25.950 ********
2026-04-18 00:56:45.994130 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994134 | orchestrator |
2026-04-18 00:56:45.994138 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-18 00:56:45.994142 | orchestrator | Saturday 18 April 2026 00:54:12 +0000 (0:00:00.204) 0:07:26.154 ********
2026-04-18 00:56:45.994146 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994149 | orchestrator |
2026-04-18 00:56:45.994153 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-18 00:56:45.994157 | orchestrator |
2026-04-18 00:56:45.994161 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-18 00:56:45.994164 | orchestrator | Saturday 18 April 2026 00:54:13 +0000 (0:00:00.904) 0:07:27.059 ********
2026-04-18 00:56:45.994169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.994174 | orchestrator |
2026-04-18 00:56:45.994182 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-18 00:56:45.994186 | orchestrator | Saturday 18 April 2026 00:54:14 +0000 (0:00:00.945) 0:07:28.005 ********
2026-04-18 00:56:45.994190 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.994194 | orchestrator |
2026-04-18 00:56:45.994197 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-18 00:56:45.994201 | orchestrator | Saturday 18 April 2026 00:54:15 +0000 (0:00:01.007) 0:07:29.013 ********
2026-04-18 00:56:45.994205 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994214 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994218 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994221 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.994225 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.994229 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.994233 | orchestrator |
2026-04-18 00:56:45.994245 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-18 00:56:45.994249 | orchestrator | Saturday 18 April 2026 00:54:16 +0000 (0:00:00.919) 0:07:29.933 ********
2026-04-18 00:56:45.994252 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994272 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994276 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994280 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994284 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994287 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994291 | orchestrator |
2026-04-18 00:56:45.994295 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-18 00:56:45.994299 | orchestrator | Saturday 18 April 2026 00:54:17 +0000 (0:00:00.969) 0:07:30.903 ********
2026-04-18 00:56:45.994302 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994306 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994310 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994313 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994317 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994321 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994325 | orchestrator |
2026-04-18 00:56:45.994328 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-18 00:56:45.994332 | orchestrator | Saturday 18 April 2026 00:54:17 +0000 (0:00:00.685) 0:07:31.588 ********
2026-04-18 00:56:45.994336 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994340 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994343 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994347 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994351 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994354 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994358 | orchestrator |
2026-04-18 00:56:45.994362 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-18 00:56:45.994366 | orchestrator | Saturday 18 April 2026 00:54:18 +0000 (0:00:00.983) 0:07:32.572 ********
2026-04-18 00:56:45.994369 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994373 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994377 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994380 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.994384 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.994387 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.994391 | orchestrator |
2026-04-18 00:56:45.994395 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-18 00:56:45.994399 | orchestrator | Saturday 18 April 2026 00:54:19 +0000 (0:00:01.000) 0:07:33.572 ********
2026-04-18 00:56:45.994402 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994406 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994410 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994413 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994417 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994420 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994424 | orchestrator |
2026-04-18 00:56:45.994428 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-18 00:56:45.994432 | orchestrator | Saturday 18 April 2026 00:54:20 +0000 (0:00:00.808) 0:07:34.381 ********
2026-04-18 00:56:45.994435 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994439 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994443 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994446 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994450 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994461 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994465 | orchestrator |
2026-04-18 00:56:45.994468 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-18 00:56:45.994472 | orchestrator | Saturday 18 April 2026 00:54:21 +0000 (0:00:00.499) 0:07:34.880 ********
2026-04-18 00:56:45.994476 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994480 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994483 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994487 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.994491 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.994494 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.994498 | orchestrator |
2026-04-18 00:56:45.994502 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-18 00:56:45.994506 | orchestrator | Saturday 18 April 2026 00:54:22 +0000 (0:00:01.097) 0:07:35.977 ********
2026-04-18 00:56:45.994510 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994513 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994517 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994521 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.994524 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.994528 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.994532 | orchestrator |
2026-04-18 00:56:45.994536 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-18 00:56:45.994540 | orchestrator | Saturday 18 April 2026 00:54:23 +0000 (0:00:01.008) 0:07:36.985 ********
2026-04-18 00:56:45.994543 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994547 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994551 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994555 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994558 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994562 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994565 | orchestrator |
2026-04-18 00:56:45.994572 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-18 00:56:45.994576 | orchestrator | Saturday 18 April 2026 00:54:23 +0000 (0:00:00.616) 0:07:37.602 ********
2026-04-18 00:56:45.994580 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994583 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994587 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994590 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.994594 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.994598 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.994602 | orchestrator |
2026-04-18 00:56:45.994605 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-18 00:56:45.994609 | orchestrator | Saturday 18 April 2026 00:54:24 +0000 (0:00:00.512) 0:07:38.114 ********
2026-04-18 00:56:45.994613 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994616 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994620 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994624 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994627 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994631 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994635 | orchestrator |
2026-04-18 00:56:45.994643 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-18 00:56:45.994646 | orchestrator | Saturday 18 April 2026 00:54:25 +0000 (0:00:00.647) 0:07:38.762 ********
2026-04-18 00:56:45.994650 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994654 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994657 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994661 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994665 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994669 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994672 | orchestrator |
2026-04-18 00:56:45.994676 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-18 00:56:45.994680 | orchestrator | Saturday 18 April 2026 00:54:25 +0000 (0:00:00.484) 0:07:39.246 ********
2026-04-18 00:56:45.994691 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994694 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994698 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994702 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994705 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994709 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994713 | orchestrator |
2026-04-18 00:56:45.994716 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-18 00:56:45.994720 | orchestrator | Saturday 18 April 2026 00:54:26 +0000 (0:00:00.627) 0:07:39.874 ********
2026-04-18 00:56:45.994724 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994727 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994731 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994735 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994738 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994742 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994746 | orchestrator |
2026-04-18 00:56:45.994750 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-18 00:56:45.994753 | orchestrator | Saturday 18 April 2026 00:54:26 +0000 (0:00:00.486) 0:07:40.361 ********
2026-04-18 00:56:45.994757 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994761 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994765 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994768 | orchestrator | skipping: [testbed-node-0]
2026-04-18 00:56:45.994772 | orchestrator | skipping: [testbed-node-1]
2026-04-18 00:56:45.994775 | orchestrator | skipping: [testbed-node-2]
2026-04-18 00:56:45.994779 | orchestrator |
2026-04-18 00:56:45.994783 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-18 00:56:45.994787 | orchestrator | Saturday 18 April 2026 00:54:27 +0000 (0:00:00.633) 0:07:40.994 ********
2026-04-18 00:56:45.994790 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.994794 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.994798 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.994801 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.994805 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.994809 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.994812 | orchestrator |
2026-04-18 00:56:45.994816 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-18 00:56:45.994820 | orchestrator | Saturday 18 April 2026 00:54:27 +0000 (0:00:00.505) 0:07:41.499 ********
2026-04-18 00:56:45.994824 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994827 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994831 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994835 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.994838 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.994842 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.994846 | orchestrator |
2026-04-18 00:56:45.994850 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-18 00:56:45.994853 | orchestrator | Saturday 18 April 2026 00:54:28 +0000 (0:00:00.648) 0:07:42.148 ********
2026-04-18 00:56:45.994857 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.994861 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.994864 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.994868 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.994872 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.994875 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.994879 | orchestrator |
2026-04-18 00:56:45.994883 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-18 00:56:45.994887 | orchestrator | Saturday 18 April 2026 00:54:29 +0000 (0:00:01.173) 0:07:43.321 ********
2026-04-18 00:56:45.994890 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-18 00:56:45.994894 | orchestrator |
2026-04-18 00:56:45.994898 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-18 00:56:45.994906 | orchestrator | Saturday 18 April 2026 00:54:33 +0000 (0:00:04.398) 0:07:47.720 ********
2026-04-18 00:56:45.994910 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-18 00:56:45.994913 | orchestrator |
2026-04-18 00:56:45.994917 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-18 00:56:45.994921 | orchestrator | Saturday 18 April 2026 00:54:35 +0000 (0:00:02.019) 0:07:49.739 ********
2026-04-18 00:56:45.994924 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.994928 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.994932 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.994936 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.994942 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.994946 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.994950 | orchestrator |
2026-04-18 00:56:45.994954 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-18 00:56:45.994957 | orchestrator | Saturday 18 April 2026 00:54:37 +0000 (0:00:01.463) 0:07:51.202 ********
2026-04-18 00:56:45.994961 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.994965 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.994968 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.994972 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.994976 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.994979 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.994983 | orchestrator |
2026-04-18 00:56:45.994987 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-18 00:56:45.994990 | orchestrator | Saturday 18 April 2026 00:54:38 +0000 (0:00:01.349) 0:07:52.552 ********
2026-04-18 00:56:45.994997 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.995002 | orchestrator |
2026-04-18 00:56:45.995006 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-18 00:56:45.995010 | orchestrator | Saturday 18 April 2026 00:54:39 +0000 (0:00:01.203) 0:07:53.756 ********
2026-04-18 00:56:45.995013 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.995017 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.995021 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.995024 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.995028 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.995032 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.995035 | orchestrator |
2026-04-18 00:56:45.995039 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-18 00:56:45.995043 | orchestrator | Saturday 18 April 2026 00:54:41 +0000 (0:00:01.436) 0:07:55.192 ********
2026-04-18 00:56:45.995046 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.995050 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.995054 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.995057 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.995061 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.995065 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.995068 | orchestrator |
2026-04-18 00:56:45.995072 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-18 00:56:45.995076 | orchestrator | Saturday 18 April 2026 00:54:45 +0000 (0:00:03.708) 0:07:58.901 ********
2026-04-18 00:56:45.995080 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:56:45.995084 | orchestrator |
2026-04-18 00:56:45.995088 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-18 00:56:45.995091 | orchestrator | Saturday 18 April 2026 00:54:46 +0000 (0:00:01.195) 0:08:00.096 ********
2026-04-18 00:56:45.995095 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995099 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995102 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995112 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.995116 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.995119 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.995123 | orchestrator |
2026-04-18 00:56:45.995127 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-18 00:56:45.995131 | orchestrator | Saturday 18 April 2026 00:54:46 +0000 (0:00:00.606) 0:08:00.703 ********
2026-04-18 00:56:45.995135 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.995139 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.995142 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.995146 | orchestrator | changed: [testbed-node-1]
2026-04-18 00:56:45.995150 | orchestrator | changed: [testbed-node-2]
2026-04-18 00:56:45.995153 | orchestrator | changed: [testbed-node-0]
2026-04-18 00:56:45.995157 | orchestrator |
2026-04-18 00:56:45.995161 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-18 00:56:45.995164 | orchestrator | Saturday 18 April 2026 00:54:49 +0000 (0:00:02.241) 0:08:02.945 ********
2026-04-18 00:56:45.995168 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995172 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995175 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995179 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:56:45.995183 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:56:45.995186 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:56:45.995190 | orchestrator |
2026-04-18 00:56:45.995194 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-18 00:56:45.995198 | orchestrator |
2026-04-18 00:56:45.995201 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-18 00:56:45.995205 | orchestrator | Saturday 18 April 2026 00:54:50 +0000 (0:00:01.034) 0:08:03.980 ********
2026-04-18 00:56:45.995209 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:45.995212 | orchestrator |
2026-04-18 00:56:45.995216 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-18 00:56:45.995220 | orchestrator | Saturday 18 April 2026 00:54:50 +0000 (0:00:00.698) 0:08:04.678 ********
2026-04-18 00:56:45.995224 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:45.995227 | orchestrator |
2026-04-18 00:56:45.995231 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-18 00:56:45.995235 | orchestrator | Saturday 18 April 2026 00:54:51 +0000 (0:00:00.506) 0:08:05.184 ********
2026-04-18 00:56:45.995238 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.995242 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995246 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995250 | orchestrator |
2026-04-18 00:56:45.995277 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-18 00:56:45.995287 | orchestrator | Saturday 18 April 2026 00:54:51 +0000 (0:00:00.306) 0:08:05.491 ********
2026-04-18 00:56:45.995294 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995298 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995302 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995305 | orchestrator |
2026-04-18 00:56:45.995309 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-18 00:56:45.995313 | orchestrator | Saturday 18 April 2026 00:54:52 +0000 (0:00:00.910) 0:08:06.402 ********
2026-04-18 00:56:45.995317 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995320 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995324 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995328 | orchestrator |
2026-04-18 00:56:45.995331 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-18 00:56:45.995335 | orchestrator | Saturday 18 April 2026 00:54:53 +0000 (0:00:00.983) 0:08:07.385 ********
2026-04-18 00:56:45.995339 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995343 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995350 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995354 | orchestrator |
2026-04-18 00:56:45.995358 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-18 00:56:45.995365 | orchestrator | Saturday 18 April 2026 00:54:54 +0000 (0:00:00.767) 0:08:08.153 ********
2026-04-18 00:56:45.995369 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.995372 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995376 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995380 | orchestrator |
2026-04-18 00:56:45.995383 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-18 00:56:45.995387 | orchestrator | Saturday 18 April 2026 00:54:54 +0000 (0:00:00.335) 0:08:08.488 ********
2026-04-18 00:56:45.995391 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.995394 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995398 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995402 | orchestrator |
2026-04-18 00:56:45.995405 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-18 00:56:45.995409 | orchestrator | Saturday 18 April 2026 00:54:55 +0000 (0:00:00.538) 0:08:09.027 ********
2026-04-18 00:56:45.995413 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.995417 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995420 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995424 | orchestrator |
2026-04-18 00:56:45.995428 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-18 00:56:45.995432 | orchestrator | Saturday 18 April 2026 00:54:55 +0000 (0:00:00.296) 0:08:09.324 ********
2026-04-18 00:56:45.995435 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995439 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995443 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995447 | orchestrator |
2026-04-18 00:56:45.995450 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-18 00:56:45.995454 | orchestrator | Saturday 18 April 2026 00:54:56 +0000 (0:00:00.747) 0:08:10.071 ********
2026-04-18 00:56:45.995458 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995462 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995465 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995469 | orchestrator |
2026-04-18 00:56:45.995473 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-18 00:56:45.995477 | orchestrator | Saturday 18 April 2026 00:54:57 +0000 (0:00:00.720) 0:08:10.791 ********
2026-04-18 00:56:45.995480 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.995484 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995488 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995492 | orchestrator |
2026-04-18 00:56:45.995495 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-18 00:56:45.995499 | orchestrator | Saturday 18 April 2026 00:54:57 +0000 (0:00:00.539) 0:08:11.331 ********
2026-04-18 00:56:45.995503 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.995506 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995510 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995514 | orchestrator |
2026-04-18 00:56:45.995518 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-18 00:56:45.995521 | orchestrator | Saturday 18 April 2026 00:54:57 +0000 (0:00:00.279) 0:08:11.610 ********
2026-04-18 00:56:45.995525 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995529 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995533 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995536 | orchestrator |
2026-04-18 00:56:45.995540 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-18 00:56:45.995544 | orchestrator | Saturday 18 April 2026 00:54:58 +0000 (0:00:00.311) 0:08:11.922 ********
2026-04-18 00:56:45.995548 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995551 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995555 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995559 | orchestrator |
2026-04-18 00:56:45.995567 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-18 00:56:45.995571 | orchestrator | Saturday 18 April 2026 00:54:58 +0000 (0:00:00.309) 0:08:12.232 ********
2026-04-18 00:56:45.995575 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995578 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995582 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995586 | orchestrator |
2026-04-18 00:56:45.995589 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-18 00:56:45.995593 | orchestrator | Saturday 18 April 2026 00:54:58 +0000 (0:00:00.497) 0:08:12.729 ********
2026-04-18 00:56:45.995597 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.995601 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995604 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995608 | orchestrator |
2026-04-18 00:56:45.995612 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-18 00:56:45.995616 | orchestrator | Saturday 18 April 2026 00:54:59 +0000 (0:00:00.327) 0:08:13.057 ********
2026-04-18 00:56:45.995619 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.995623 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995627 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995631 | orchestrator |
2026-04-18 00:56:45.995634 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-18 00:56:45.995639 | orchestrator | Saturday 18 April 2026 00:54:59 +0000 (0:00:00.282) 0:08:13.339 ********
2026-04-18 00:56:45.995645 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.995651 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995661 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995667 | orchestrator |
2026-04-18 00:56:45.995673 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-18 00:56:45.995677 | orchestrator | Saturday 18 April 2026 00:54:59 +0000 (0:00:00.327) 0:08:13.666 ********
2026-04-18 00:56:45.995681 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995685 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995688 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995692 | orchestrator |
2026-04-18 00:56:45.995696 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-18 00:56:45.995700 | orchestrator | Saturday 18 April 2026 00:55:00 +0000 (0:00:00.559) 0:08:14.226 ********
2026-04-18 00:56:45.995703 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.995707 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.995711 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.995715 | orchestrator |
2026-04-18 00:56:45.995718 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-18 00:56:45.995722 | orchestrator | Saturday 18 April 2026 00:55:00 +0000 (0:00:00.517) 0:08:14.743 ********
2026-04-18 00:56:45.995729 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.995733 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.995737 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-18 00:56:45.995740 | orchestrator |
2026-04-18 00:56:45.995744 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-18 00:56:45.995748 | orchestrator | Saturday 18 April 2026 00:55:01 +0000 (0:00:00.362) 0:08:15.106 ********
2026-04-18 00:56:45.995752 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-18 00:56:45.995755 | orchestrator |
2026-04-18 00:56:45.995759 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-18 00:56:45.995763 | orchestrator | Saturday 18 April 2026 00:55:04 +0000 (0:00:02.803) 0:08:17.910 ********
2026-04-18 00:56:45.995768 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-18 00:56:45.995773 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.995777 | orchestrator | 2026-04-18 00:56:45.995784 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-18 00:56:45.995788 | orchestrator | Saturday 18 April 2026 00:55:04 +0000 (0:00:00.209) 0:08:18.119 ******** 2026-04-18 00:56:45.995793 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-18 00:56:45.995803 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-18 00:56:45.995807 | orchestrator | 2026-04-18 00:56:45.995810 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-04-18 00:56:45.995814 | orchestrator | Saturday 18 April 2026 00:55:11 +0000 (0:00:07.149) 0:08:25.269 ******** 2026-04-18 00:56:45.995818 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-18 00:56:45.995822 | orchestrator | 2026-04-18 00:56:45.995826 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-18 00:56:45.995829 | orchestrator | Saturday 18 April 2026 00:55:15 +0000 (0:00:03.744) 0:08:29.013 ******** 2026-04-18 00:56:45.995833 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-18 00:56:45.995837 | orchestrator | 2026-04-18 00:56:45.995841 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-18 00:56:45.995844 | orchestrator | Saturday 18 April 2026 00:55:15 +0000 (0:00:00.452) 0:08:29.466 ******** 2026-04-18 00:56:45.995848 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-18 00:56:45.995852 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-18 00:56:45.995855 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-18 00:56:45.995859 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-18 00:56:45.995863 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-18 00:56:45.995867 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-18 00:56:45.995870 | orchestrator | 2026-04-18 00:56:45.995874 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-18 00:56:45.995878 | orchestrator | Saturday 18 April 2026 00:55:16 +0000 (0:00:01.269) 0:08:30.735 ******** 2026-04-18 00:56:45.995881 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-18 00:56:45.995886 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-18 00:56:45.995889 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-18 00:56:45.995893 | orchestrator | 2026-04-18 00:56:45.995897 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-18 00:56:45.995901 | orchestrator | Saturday 18 April 2026 00:55:19 +0000 (0:00:02.313) 0:08:33.049 ******** 2026-04-18 00:56:45.995905 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-18 00:56:45.995908 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-04-18 00:56:45.995912 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.995919 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-18 00:56:45.995923 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-18 00:56:45.995927 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.995933 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-18 00:56:45.995939 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-18 00:56:45.995945 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.995950 | orchestrator | 2026-04-18 00:56:45.995956 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-18 00:56:45.995961 | orchestrator | Saturday 18 April 2026 00:55:20 +0000 (0:00:01.049) 0:08:34.098 ******** 2026-04-18 00:56:45.995972 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.995978 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.995983 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.995990 | orchestrator | 2026-04-18 00:56:45.995995 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-18 00:56:45.996001 | orchestrator | Saturday 18 April 2026 00:55:22 +0000 (0:00:02.339) 0:08:36.438 ******** 2026-04-18 00:56:45.996011 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996018 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996024 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.996031 | orchestrator | 2026-04-18 00:56:45.996037 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-18 00:56:45.996043 | orchestrator | Saturday 18 April 2026 00:55:22 +0000 (0:00:00.306) 0:08:36.744 ******** 2026-04-18 00:56:45.996049 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-18 00:56:45.996054 | orchestrator | 2026-04-18 00:56:45.996060 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-18 00:56:45.996065 | orchestrator | Saturday 18 April 2026 00:55:23 +0000 (0:00:00.746) 0:08:37.491 ******** 2026-04-18 00:56:45.996071 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.996077 | orchestrator | 2026-04-18 00:56:45.996083 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-18 00:56:45.996089 | orchestrator | Saturday 18 April 2026 00:55:24 +0000 (0:00:00.490) 0:08:37.981 ******** 2026-04-18 00:56:45.996095 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.996099 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.996103 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.996106 | orchestrator | 2026-04-18 00:56:45.996110 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-18 00:56:45.996114 | orchestrator | Saturday 18 April 2026 00:55:25 +0000 (0:00:01.415) 0:08:39.397 ******** 2026-04-18 00:56:45.996118 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.996121 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.996125 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.996129 | orchestrator | 2026-04-18 00:56:45.996133 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-18 00:56:45.996136 | orchestrator | Saturday 18 April 2026 00:55:26 +0000 (0:00:01.086) 0:08:40.484 ******** 2026-04-18 00:56:45.996140 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.996144 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.996148 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.996151 | orchestrator | 2026-04-18 
00:56:45.996155 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-18 00:56:45.996159 | orchestrator | Saturday 18 April 2026 00:55:28 +0000 (0:00:01.637) 0:08:42.121 ******** 2026-04-18 00:56:45.996163 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.996166 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.996170 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.996174 | orchestrator | 2026-04-18 00:56:45.996178 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-18 00:56:45.996181 | orchestrator | Saturday 18 April 2026 00:55:30 +0000 (0:00:01.967) 0:08:44.089 ******** 2026-04-18 00:56:45.996185 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996189 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996193 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996196 | orchestrator | 2026-04-18 00:56:45.996200 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-18 00:56:45.996204 | orchestrator | Saturday 18 April 2026 00:55:31 +0000 (0:00:01.568) 0:08:45.658 ******** 2026-04-18 00:56:45.996208 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.996211 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.996219 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.996223 | orchestrator | 2026-04-18 00:56:45.996227 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-18 00:56:45.996231 | orchestrator | Saturday 18 April 2026 00:55:32 +0000 (0:00:00.768) 0:08:46.427 ******** 2026-04-18 00:56:45.996235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.996238 | orchestrator | 2026-04-18 00:56:45.996242 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-04-18 00:56:45.996246 | orchestrator | Saturday 18 April 2026 00:55:33 +0000 (0:00:00.756) 0:08:47.183 ******** 2026-04-18 00:56:45.996250 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996253 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996275 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996279 | orchestrator | 2026-04-18 00:56:45.996283 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-18 00:56:45.996287 | orchestrator | Saturday 18 April 2026 00:55:33 +0000 (0:00:00.295) 0:08:47.478 ******** 2026-04-18 00:56:45.996290 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.996294 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.996298 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.996302 | orchestrator | 2026-04-18 00:56:45.996305 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-18 00:56:45.996309 | orchestrator | Saturday 18 April 2026 00:55:34 +0000 (0:00:01.233) 0:08:48.711 ******** 2026-04-18 00:56:45.996313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-18 00:56:45.996320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-18 00:56:45.996324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-18 00:56:45.996328 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996332 | orchestrator | 2026-04-18 00:56:45.996335 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-18 00:56:45.996339 | orchestrator | Saturday 18 April 2026 00:55:35 +0000 (0:00:01.000) 0:08:49.712 ******** 2026-04-18 00:56:45.996343 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996347 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996350 | orchestrator | ok: [testbed-node-5] 2026-04-18 
00:56:45.996354 | orchestrator | 2026-04-18 00:56:45.996358 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-18 00:56:45.996362 | orchestrator | 2026-04-18 00:56:45.996365 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-18 00:56:45.996369 | orchestrator | Saturday 18 April 2026 00:55:36 +0000 (0:00:00.517) 0:08:50.230 ******** 2026-04-18 00:56:45.996376 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.996380 | orchestrator | 2026-04-18 00:56:45.996384 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-18 00:56:45.996388 | orchestrator | Saturday 18 April 2026 00:55:37 +0000 (0:00:00.675) 0:08:50.905 ******** 2026-04-18 00:56:45.996392 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.996395 | orchestrator | 2026-04-18 00:56:45.996399 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-18 00:56:45.996403 | orchestrator | Saturday 18 April 2026 00:55:37 +0000 (0:00:00.510) 0:08:51.415 ******** 2026-04-18 00:56:45.996406 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996410 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996414 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.996418 | orchestrator | 2026-04-18 00:56:45.996421 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-18 00:56:45.996425 | orchestrator | Saturday 18 April 2026 00:55:37 +0000 (0:00:00.277) 0:08:51.693 ******** 2026-04-18 00:56:45.996434 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996438 | orchestrator | ok: [testbed-node-4] 2026-04-18 
00:56:45.996442 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996445 | orchestrator | 2026-04-18 00:56:45.996449 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-18 00:56:45.996453 | orchestrator | Saturday 18 April 2026 00:55:38 +0000 (0:00:00.890) 0:08:52.584 ******** 2026-04-18 00:56:45.996457 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996460 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996464 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996468 | orchestrator | 2026-04-18 00:56:45.996471 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-18 00:56:45.996475 | orchestrator | Saturday 18 April 2026 00:55:39 +0000 (0:00:00.730) 0:08:53.315 ******** 2026-04-18 00:56:45.996479 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996482 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996486 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996490 | orchestrator | 2026-04-18 00:56:45.996493 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-18 00:56:45.996497 | orchestrator | Saturday 18 April 2026 00:55:40 +0000 (0:00:00.707) 0:08:54.023 ******** 2026-04-18 00:56:45.996501 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996505 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996508 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.996512 | orchestrator | 2026-04-18 00:56:45.996516 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-18 00:56:45.996519 | orchestrator | Saturday 18 April 2026 00:55:40 +0000 (0:00:00.335) 0:08:54.358 ******** 2026-04-18 00:56:45.996523 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996527 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996531 | orchestrator | skipping: 
[testbed-node-5] 2026-04-18 00:56:45.996534 | orchestrator | 2026-04-18 00:56:45.996538 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-18 00:56:45.996542 | orchestrator | Saturday 18 April 2026 00:55:41 +0000 (0:00:00.508) 0:08:54.867 ******** 2026-04-18 00:56:45.996546 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996549 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996553 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.996557 | orchestrator | 2026-04-18 00:56:45.996561 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-18 00:56:45.996564 | orchestrator | Saturday 18 April 2026 00:55:41 +0000 (0:00:00.275) 0:08:55.143 ******** 2026-04-18 00:56:45.996568 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996572 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996576 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996579 | orchestrator | 2026-04-18 00:56:45.996583 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-18 00:56:45.996587 | orchestrator | Saturday 18 April 2026 00:55:42 +0000 (0:00:00.763) 0:08:55.907 ******** 2026-04-18 00:56:45.996591 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996594 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996598 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996602 | orchestrator | 2026-04-18 00:56:45.996606 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-18 00:56:45.996610 | orchestrator | Saturday 18 April 2026 00:55:42 +0000 (0:00:00.742) 0:08:56.649 ******** 2026-04-18 00:56:45.996613 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996617 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996621 | orchestrator | skipping: [testbed-node-5] 2026-04-18 
00:56:45.996625 | orchestrator | 2026-04-18 00:56:45.996629 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-18 00:56:45.996632 | orchestrator | Saturday 18 April 2026 00:55:43 +0000 (0:00:00.500) 0:08:57.149 ******** 2026-04-18 00:56:45.996636 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996640 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996647 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.996651 | orchestrator | 2026-04-18 00:56:45.996657 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-18 00:56:45.996661 | orchestrator | Saturday 18 April 2026 00:55:43 +0000 (0:00:00.294) 0:08:57.444 ******** 2026-04-18 00:56:45.996665 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996669 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996672 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996676 | orchestrator | 2026-04-18 00:56:45.996680 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-18 00:56:45.996684 | orchestrator | Saturday 18 April 2026 00:55:44 +0000 (0:00:00.335) 0:08:57.780 ******** 2026-04-18 00:56:45.996688 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996695 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996701 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996707 | orchestrator | 2026-04-18 00:56:45.996713 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-18 00:56:45.996719 | orchestrator | Saturday 18 April 2026 00:55:44 +0000 (0:00:00.306) 0:08:58.087 ******** 2026-04-18 00:56:45.996726 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996732 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996740 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996747 | orchestrator | 2026-04-18 
00:56:45.996757 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-18 00:56:45.996764 | orchestrator | Saturday 18 April 2026 00:55:44 +0000 (0:00:00.553) 0:08:58.640 ******** 2026-04-18 00:56:45.996770 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996777 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996783 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.996787 | orchestrator | 2026-04-18 00:56:45.996791 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-18 00:56:45.996794 | orchestrator | Saturday 18 April 2026 00:55:45 +0000 (0:00:00.295) 0:08:58.935 ******** 2026-04-18 00:56:45.996798 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996802 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996806 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.996809 | orchestrator | 2026-04-18 00:56:45.996813 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-18 00:56:45.996817 | orchestrator | Saturday 18 April 2026 00:55:45 +0000 (0:00:00.305) 0:08:59.241 ******** 2026-04-18 00:56:45.996821 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996824 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996828 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.996832 | orchestrator | 2026-04-18 00:56:45.996835 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-18 00:56:45.996839 | orchestrator | Saturday 18 April 2026 00:55:45 +0000 (0:00:00.268) 0:08:59.510 ******** 2026-04-18 00:56:45.996843 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996847 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996850 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996854 | orchestrator | 2026-04-18 00:56:45.996858 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-18 00:56:45.996862 | orchestrator | Saturday 18 April 2026 00:55:46 +0000 (0:00:00.285) 0:08:59.796 ******** 2026-04-18 00:56:45.996865 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:56:45.996869 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:56:45.996873 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:56:45.996877 | orchestrator | 2026-04-18 00:56:45.996881 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-18 00:56:45.996884 | orchestrator | Saturday 18 April 2026 00:55:46 +0000 (0:00:00.746) 0:09:00.542 ******** 2026-04-18 00:56:45.996888 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.996892 | orchestrator | 2026-04-18 00:56:45.996896 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-18 00:56:45.996904 | orchestrator | Saturday 18 April 2026 00:55:47 +0000 (0:00:00.492) 0:09:01.035 ******** 2026-04-18 00:56:45.996908 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-18 00:56:45.996911 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-18 00:56:45.996915 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-18 00:56:45.996919 | orchestrator | 2026-04-18 00:56:45.996922 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-18 00:56:45.996926 | orchestrator | Saturday 18 April 2026 00:55:49 +0000 (0:00:02.591) 0:09:03.627 ******** 2026-04-18 00:56:45.996930 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-18 00:56:45.996934 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-18 00:56:45.996938 | orchestrator | changed: [testbed-node-3] 2026-04-18 00:56:45.996941 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-04-18 00:56:45.996945 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-18 00:56:45.996949 | orchestrator | changed: [testbed-node-4] 2026-04-18 00:56:45.996953 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-18 00:56:45.996956 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-18 00:56:45.996961 | orchestrator | changed: [testbed-node-5] 2026-04-18 00:56:45.996967 | orchestrator | 2026-04-18 00:56:45.996972 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-18 00:56:45.996978 | orchestrator | Saturday 18 April 2026 00:55:51 +0000 (0:00:01.258) 0:09:04.886 ******** 2026-04-18 00:56:45.996984 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:56:45.996989 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:56:45.996995 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:56:45.997000 | orchestrator | 2026-04-18 00:56:45.997006 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-18 00:56:45.997011 | orchestrator | Saturday 18 April 2026 00:55:51 +0000 (0:00:00.300) 0:09:05.186 ******** 2026-04-18 00:56:45.997017 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 00:56:45.997023 | orchestrator | 2026-04-18 00:56:45.997029 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-18 00:56:45.997035 | orchestrator | Saturday 18 April 2026 00:55:52 +0000 (0:00:00.687) 0:09:05.874 ******** 2026-04-18 00:56:45.997046 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-18 00:56:45.997052 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-18 00:56:45.997058 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-18 00:56:45.997064 | orchestrator | 2026-04-18 00:56:45.997070 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-18 00:56:45.997076 | orchestrator | Saturday 18 April 2026 00:55:52 +0000 (0:00:00.807) 0:09:06.681 ******** 2026-04-18 00:56:45.997083 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-18 00:56:45.997093 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-18 00:56:45.997099 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-18 00:56:45.997105 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-18 00:56:45.997109 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-18 00:56:45.997113 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-18 00:56:45.997122 | orchestrator | 2026-04-18 00:56:45.997125 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-18 00:56:45.997129 | orchestrator | Saturday 18 April 2026 00:55:57 +0000 (0:00:04.521) 0:09:11.203 ******** 2026-04-18 00:56:45.997133 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-18 00:56:45.997137 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-18 00:56:45.997141 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:56:45.997144 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-18 00:56:45.997148 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:56:45.997152 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-18 00:56:45.997156 | orchestrator |
2026-04-18 00:56:45.997159 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-18 00:56:45.997163 | orchestrator | Saturday 18 April 2026 00:55:59 +0000 (0:00:02.290) 0:09:13.493 ********
2026-04-18 00:56:45.997167 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-18 00:56:45.997171 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.997174 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-18 00:56:45.997178 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.997182 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-18 00:56:45.997186 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.997189 | orchestrator |
2026-04-18 00:56:45.997193 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-18 00:56:45.997197 | orchestrator | Saturday 18 April 2026 00:56:01 +0000 (0:00:01.420) 0:09:14.914 ********
2026-04-18 00:56:45.997201 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-04-18 00:56:45.997204 | orchestrator |
2026-04-18 00:56:45.997208 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-18 00:56:45.997212 | orchestrator | Saturday 18 April 2026 00:56:01 +0000 (0:00:00.222) 0:09:15.137 ********
2026-04-18 00:56:45.997216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997235 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.997239 | orchestrator |
2026-04-18 00:56:45.997243 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-18 00:56:45.997247 | orchestrator | Saturday 18 April 2026 00:56:01 +0000 (0:00:00.591) 0:09:15.728 ********
2026-04-18 00:56:45.997250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997304 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.997311 | orchestrator |
2026-04-18 00:56:45.997317 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-18 00:56:45.997322 | orchestrator | Saturday 18 April 2026 00:56:02 +0000 (0:00:00.557) 0:09:16.286 ********
2026-04-18 00:56:45.997328 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997337 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997344 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997349 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997356 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-18 00:56:45.997362 | orchestrator |
2026-04-18 00:56:45.997369 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-18 00:56:45.997375 | orchestrator | Saturday 18 April 2026 00:56:33 +0000 (0:00:30.837) 0:09:47.124 ********
2026-04-18 00:56:45.997381 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.997388 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.997392 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.997396 | orchestrator |
2026-04-18 00:56:45.997400 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-18 00:56:45.997404 | orchestrator | Saturday 18 April 2026 00:56:33 +0000 (0:00:00.246) 0:09:47.370 ********
2026-04-18 00:56:45.997408 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.997411 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.997415 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.997419 | orchestrator |
2026-04-18 00:56:45.997422 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-18 00:56:45.997426 | orchestrator | Saturday 18 April 2026 00:56:33 +0000 (0:00:00.258) 0:09:47.628 ********
2026-04-18 00:56:45.997430 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:45.997434 | orchestrator |
2026-04-18 00:56:45.997437 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-18 00:56:45.997441 | orchestrator | Saturday 18 April 2026 00:56:34 +0000 (0:00:00.615) 0:09:48.243 ********
2026-04-18 00:56:45.997445 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:45.997449 | orchestrator |
2026-04-18 00:56:45.997453 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-18 00:56:45.997457 | orchestrator | Saturday 18 April 2026 00:56:34 +0000 (0:00:00.421) 0:09:48.665 ********
2026-04-18 00:56:45.997461 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.997464 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.997468 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.997473 | orchestrator |
2026-04-18 00:56:45.997479 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-18 00:56:45.997485 | orchestrator | Saturday 18 April 2026 00:56:36 +0000 (0:00:01.422) 0:09:50.087 ********
2026-04-18 00:56:45.997490 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.997496 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.997503 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.997509 | orchestrator |
2026-04-18 00:56:45.997515 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-18 00:56:45.997528 | orchestrator | Saturday 18 April 2026 00:56:37 +0000 (0:00:01.232) 0:09:51.320 ********
2026-04-18 00:56:45.997535 | orchestrator | changed: [testbed-node-3]
2026-04-18 00:56:45.997539 | orchestrator | changed: [testbed-node-5]
2026-04-18 00:56:45.997543 | orchestrator | changed: [testbed-node-4]
2026-04-18 00:56:45.997546 | orchestrator |
2026-04-18 00:56:45.997550 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-18 00:56:45.997554 | orchestrator | Saturday 18 April 2026 00:56:39 +0000 (0:00:01.806) 0:09:53.126 ********
2026-04-18 00:56:45.997558 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-18 00:56:45.997562 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-18 00:56:45.997565 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-18 00:56:45.997569 | orchestrator |
2026-04-18 00:56:45.997573 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-18 00:56:45.997577 | orchestrator | Saturday 18 April 2026 00:56:41 +0000 (0:00:02.599) 0:09:55.726 ********
2026-04-18 00:56:45.997581 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.997584 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.997588 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.997592 | orchestrator |
2026-04-18 00:56:45.997599 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-18 00:56:45.997602 | orchestrator | Saturday 18 April 2026 00:56:42 +0000 (0:00:00.327) 0:09:56.053 ********
2026-04-18 00:56:45.997606 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:56:45.997610 | orchestrator |
2026-04-18 00:56:45.997614 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-18 00:56:45.997618 | orchestrator | Saturday 18 April 2026 00:56:42 +0000 (0:00:00.691) 0:09:56.745 ********
2026-04-18 00:56:45.997621 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.997625 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.997629 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.997632 | orchestrator |
2026-04-18 00:56:45.997637 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-18 00:56:45.997640 | orchestrator | Saturday 18 April 2026 00:56:43 +0000 (0:00:00.306) 0:09:57.051 ********
2026-04-18 00:56:45.997644 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:56:45.997652 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:56:45.997656 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:56:45.997660 | orchestrator |
2026-04-18 00:56:45.997664 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-18 00:56:45.997667 | orchestrator | Saturday 18 April 2026 00:56:43 +0000 (0:00:00.339) 0:09:57.391 ********
2026-04-18 00:56:45.997671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:56:45.997675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:56:45.997679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:56:45.997682 | orchestrator | skipping: [testbed-node-3]
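The systemd tasks above template a radosgw unit plus a `ceph-radosgw.target`, enable the target, and then start one unit per rgw instance item. A small sketch of how the per-instance unit names line up with the items in the log; the exact `ceph-radosgw@rgw.<hostname>.<instance_name>` naming is an assumption based on ceph-ansible's template-unit convention, not something this log confirms:

```python
def rgw_unit_name(hostname, instance_name):
    """Per-instance template unit started by the role.

    Naming is assumed from ceph-ansible's ceph-radosgw@.service convention.
    """
    return f"ceph-radosgw@rgw.{hostname}.{instance_name}.service"

# Instance items copied from the "Systemd start rgw container" task above.
INSTANCES = {
    "testbed-node-3": {"instance_name": "rgw0", "radosgw_address": "192.168.16.13", "radosgw_frontend_port": 8081},
    "testbed-node-4": {"instance_name": "rgw0", "radosgw_address": "192.168.16.14", "radosgw_frontend_port": 8081},
    "testbed-node-5": {"instance_name": "rgw0", "radosgw_address": "192.168.16.15", "radosgw_frontend_port": 8081},
}

for host, item in INSTANCES.items():
    unit = rgw_unit_name(host, item["instance_name"])
    print(f"{host}: {unit} on {item['radosgw_address']}:{item['radosgw_frontend_port']}")
```

Grouping the instance units under a target is what lets `systemctl enable ceph-radosgw.target` bring all rgw instances on a node up together at boot.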
2026-04-18 00:56:45.997686 | orchestrator |
2026-04-18 00:56:45.997690 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-18 00:56:45.997694 | orchestrator | Saturday 18 April 2026 00:56:44 +0000 (0:00:00.814) 0:09:58.206 ********
2026-04-18 00:56:45.997697 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:56:45.997701 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:56:45.997705 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:56:45.997708 | orchestrator |
2026-04-18 00:56:45.997712 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:56:45.997716 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-18 00:56:45.997725 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-18 00:56:45.997729 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-18 00:56:45.997733 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-18 00:56:45.997737 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-18 00:56:45.997740 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-18 00:56:45.997744 | orchestrator |
2026-04-18 00:56:45.997748 | orchestrator |
2026-04-18 00:56:45.997752 | orchestrator |
2026-04-18 00:56:45.997755 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:56:45.997759 | orchestrator | Saturday 18 April 2026 00:56:44 +0000 (0:00:00.214) 0:09:58.420 ********
2026-04-18 00:56:45.997763 | orchestrator | ===============================================================================
2026-04-18 00:56:45.997767 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 43.54s
2026-04-18 00:56:45.997771 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 36.63s
2026-04-18 00:56:45.997774 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.48s
2026-04-18 00:56:45.997778 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.84s
2026-04-18 00:56:45.997782 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 12.92s
2026-04-18 00:56:45.997786 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.58s
2026-04-18 00:56:45.997789 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.52s
2026-04-18 00:56:45.997793 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.86s
2026-04-18 00:56:45.997797 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.45s
2026-04-18 00:56:45.997801 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.15s
2026-04-18 00:56:45.997804 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.45s
2026-04-18 00:56:45.997809 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.80s
2026-04-18 00:56:45.997815 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.52s
2026-04-18 00:56:45.997821 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.40s
2026-04-18 00:56:45.997827 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.31s
2026-04-18 00:56:45.997834 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.94s
2026-04-18 00:56:45.997840 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.82s
2026-04-18 00:56:45.997849 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.74s
2026-04-18 00:56:45.997855 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.71s
2026-04-18 00:56:45.997861 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.45s
2026-04-18 00:56:45.997867 | orchestrator | 2026-04-18 00:56:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:56:45.997874 | orchestrator | 2026-04-18 00:56:45 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED
2026-04-18 00:56:45.997881 | orchestrator | 2026-04-18 00:56:45 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED
2026-04-18 00:56:45.997893 | orchestrator | 2026-04-18 00:56:45 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:56:49.036302 | orchestrator | 2026-04-18 00:56:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:56:49.037337 | orchestrator | 2026-04-18 00:56:49 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state STARTED
2026-04-18 00:56:49.037791 | orchestrator | 2026-04-18 00:56:49 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state STARTED
2026-04-18 00:56:49.039364 | orchestrator | 2026-04-18 00:56:49 | INFO  | Task 334bcbd5-03d3-4484-92a7-bcf130b92771 is in state STARTED
2026-04-18 00:56:49.039441 | orchestrator | 2026-04-18 00:56:49 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:57:10.353225 | orchestrator | 2026-04-18 00:57:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:57:10.353433 | orchestrator | 2026-04-18 00:57:10 | INFO  | Task 8e77cbc9-8b91-4f14-bc7a-2f948f5a7f42 is in state SUCCESS
2026-04-18 00:57:10.354235 | orchestrator | 2026-04-18 00:57:10 | INFO  | Task 664338ca-7b11-4571-a186-2fa2049f5d65 is in state SUCCESS
2026-04-18 00:57:10.358567 | orchestrator | 2026-04-18 00:57:10 | INFO  | Task 334bcbd5-03d3-4484-92a7-bcf130b92771 is in state STARTED
2026-04-18 00:57:10.358613 | orchestrator | 2026-04-18 00:57:10 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:57:13.408825 | orchestrator | 2026-04-18 00:57:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:57:13.409263 | orchestrator | 2026-04-18 00:57:13 | INFO  | Task 334bcbd5-03d3-4484-92a7-bcf130b92771 is in state STARTED
2026-04-18 00:57:13.409475 | orchestrator | 2026-04-18 00:57:13 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:58:56.896867 | orchestrator | 2026-04-18 00:58:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:58:56.898816 | orchestrator | 2026-04-18 00:58:56 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:58:56.903284 | orchestrator | 2026-04-18 00:58:56 | INFO  | Task 334bcbd5-03d3-4484-92a7-bcf130b92771 is in state SUCCESS
2026-04-18 00:58:56.905000 | orchestrator |
2026-04-18 00:58:56.905053 | orchestrator |
2026-04-18 00:58:56.905061 | orchestrator | PLAY [Group hosts based on configuration] **************************************
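The OSISM wrapper producing the `Task … is in state …` lines above polls its task queue until every task reaches a terminal state, sleeping one second between checks. A compact sketch of that wait loop; `get_state` is a stand-in for the real API client and the function is an illustration of the pattern, not OSISM code:

```python
import time

TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, sleep=time.sleep):
    """Poll until every task is terminal; return {task_id: final_state}."""
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL:
                final[task_id] = state
        pending -= set(final)          # stop polling finished tasks
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return final
```

Injecting `sleep` makes the loop testable without real delays; the log shows exactly this behavior, with finished tasks (e.g. 8e77cbc9… at 00:57:10) dropping out of subsequent check cycles.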
**************************************
2026-04-18 00:58:56.905068 | orchestrator |
2026-04-18 00:58:56.905073 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-18 00:58:56.905079 | orchestrator | Saturday 18 April 2026 00:56:12 +0000 (0:00:00.297) 0:00:00.297 ********
2026-04-18 00:58:56.905084 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:58:56.905091 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:58:56.905096 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:58:56.905103 | orchestrator |
2026-04-18 00:58:56.905111 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:58:56.905120 | orchestrator | Saturday 18 April 2026 00:56:12 +0000 (0:00:00.264) 0:00:00.562 ********
2026-04-18 00:58:56.905132 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-18 00:58:56.905145 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-18 00:58:56.905153 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-18 00:58:56.905208 | orchestrator |
2026-04-18 00:58:56.905220 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-18 00:58:56.905228 | orchestrator |
2026-04-18 00:58:56.905237 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-18 00:58:56.905332 | orchestrator | Saturday 18 April 2026 00:56:13 +0000 (0:00:00.294) 0:00:00.857 ********
2026-04-18 00:58:56.905343 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:58:56.905353 | orchestrator |
2026-04-18 00:58:56.905362 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-18 00:58:56.905626 | orchestrator | Saturday 18 April 2026 00:56:13 +0000 (0:00:00.577) 0:00:01.435 ********
2026-04-18 00:58:56.905641 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (5 retries left).
2026-04-18 00:58:56.905648 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (4 retries left).
2026-04-18 00:58:56.905655 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (3 retries left).
2026-04-18 00:58:56.905664 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (2 retries left).
2026-04-18 00:58:56.905672 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating/deleting services (1 retries left).
2026-04-18 00:58:56.905682 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:58:56.905692 | orchestrator |
2026-04-18 00:58:56.905700 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:58:56.905708 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-18 00:58:56.905718 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:58:56.905728 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:58:56.905737 | orchestrator |
2026-04-18 00:58:56.905746 | orchestrator |
2026-04-18 00:58:56.905754 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:58:56.905784 | orchestrator | Saturday 18 April 2026 00:57:07 +0000 (0:00:53.663) 0:00:55.098 ********
2026-04-18 00:58:56.905792 | orchestrator | ===============================================================================
2026-04-18 00:58:56.905798 | orchestrator | service-ks-register : placement | Creating/deleting services ----------- 53.66s
2026-04-18 00:58:56.905803 | orchestrator | placement : include_tasks ----------------------------------------------- 0.58s
2026-04-18 00:58:56.905808 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s
2026-04-18 00:58:56.905813 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2026-04-18 00:58:56.905818 | orchestrator |
2026-04-18 00:58:56.905823 | orchestrator |
2026-04-18 00:58:56.905828 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-18 00:58:56.905833 | orchestrator |
2026-04-18 00:58:56.905839 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-18 00:58:56.905845 | orchestrator | Saturday 18 April 2026 00:56:13 +0000 (0:00:00.299) 0:00:00.299 ********
2026-04-18 00:58:56.905850 | orchestrator | ok: [testbed-node-0]
2026-04-18 00:58:56.905856 | orchestrator | ok: [testbed-node-1]
2026-04-18 00:58:56.905861 | orchestrator | ok: [testbed-node-2]
2026-04-18 00:58:56.905866 | orchestrator |
2026-04-18 00:58:56.905871 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-18 00:58:56.905876 | orchestrator | Saturday 18 April 2026 00:56:13 +0000 (0:00:00.298) 0:00:00.598 ********
2026-04-18 00:58:56.905881 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-18 00:58:56.905887 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-18 00:58:56.905892 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-18 00:58:56.905897 | orchestrator |
2026-04-18
00:58:56.905902 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-18 00:58:56.905907 | orchestrator |
2026-04-18 00:58:56.905922 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-18 00:58:56.905928 | orchestrator | Saturday 18 April 2026 00:56:13 +0000 (0:00:00.295) 0:00:00.893 ********
2026-04-18 00:58:56.905933 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-18 00:58:56.905938 | orchestrator |
2026-04-18 00:58:56.905944 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-04-18 00:58:56.905949 | orchestrator | Saturday 18 April 2026 00:56:14 +0000 (0:00:00.622) 0:00:01.516 ********
2026-04-18 00:58:56.906244 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (5 retries left).
2026-04-18 00:58:56.906260 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (4 retries left).
2026-04-18 00:58:56.906268 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (3 retries left).
2026-04-18 00:58:56.906276 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (2 retries left).
2026-04-18 00:58:56.906285 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating/deleting services (1 retries left).
2026-04-18 00:58:56.906399 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "msg": "kolla_toolbox container is missing or not running!"}
2026-04-18 00:58:56.906411 | orchestrator |
2026-04-18 00:58:56.906417 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:58:56.906422 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-04-18 00:58:56.906437 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:58:56.906443 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-18 00:58:56.906448 | orchestrator |
2026-04-18 00:58:56.906453 | orchestrator |
2026-04-18 00:58:56.906458 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:58:56.906463 | orchestrator | Saturday 18 April 2026 00:57:07 +0000 (0:00:53.555) 0:00:55.071 ********
2026-04-18 00:58:56.906468 | orchestrator | ===============================================================================
2026-04-18 00:58:56.906473 | orchestrator | service-ks-register : magnum | Creating/deleting services -------------- 53.56s
2026-04-18 00:58:56.906478 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.62s
2026-04-18 00:58:56.906484 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-04-18 00:58:56.906489 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.30s
2026-04-18 00:58:56.906494 | orchestrator |
2026-04-18 00:58:56.906499 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-18 00:58:56.906504 | orchestrator | 2.16.14
2026-04-18 00:58:56.906510 | orchestrator |
2026-04-18 00:58:56.906515 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-18 00:58:56.906520 | orchestrator |
2026-04-18 00:58:56.906526 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-18 00:58:56.906531 | orchestrator | Saturday 18 April 2026 00:56:49 +0000 (0:00:00.530) 0:00:00.530 ********
2026-04-18 00:58:56.906536 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:58:56.906541 | orchestrator |
2026-04-18 00:58:56.906546 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-18 00:58:56.906551 | orchestrator | Saturday 18 April 2026 00:56:50 +0000 (0:00:00.585) 0:00:01.116 ********
2026-04-18 00:58:56.906556 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.906561 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.906566 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.906571 | orchestrator |
2026-04-18 00:58:56.906576 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-18 00:58:56.906581 | orchestrator | Saturday 18 April 2026 00:56:51 +0000 (0:00:00.977) 0:00:02.093 ********
2026-04-18 00:58:56.906586 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.906591 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.906600 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.906605 | orchestrator |
2026-04-18 00:58:56.906610 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-18
00:58:56.906615 | orchestrator | Saturday 18 April 2026 00:56:51 +0000 (0:00:00.268) 0:00:02.361 ********
2026-04-18 00:58:56.906620 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.906625 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.906630 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.906635 | orchestrator |
2026-04-18 00:58:56.906640 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-18 00:58:56.906645 | orchestrator | Saturday 18 April 2026 00:56:52 +0000 (0:00:00.771) 0:00:03.133 ********
2026-04-18 00:58:56.906650 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.906655 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.906660 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.906665 | orchestrator |
2026-04-18 00:58:56.906670 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-18 00:58:56.906704 | orchestrator | Saturday 18 April 2026 00:56:52 +0000 (0:00:00.310) 0:00:03.443 ********
2026-04-18 00:58:56.906711 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.906716 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.906721 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.906726 | orchestrator |
2026-04-18 00:58:56.906736 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-18 00:58:56.906741 | orchestrator | Saturday 18 April 2026 00:56:52 +0000 (0:00:00.282) 0:00:03.726 ********
2026-04-18 00:58:56.906746 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.906751 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.906756 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.906760 | orchestrator |
2026-04-18 00:58:56.906765 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-18 00:58:56.906771 | orchestrator | Saturday 18 April 2026 00:56:52 +0000 (0:00:00.278) 0:00:04.005 ********
2026-04-18 00:58:56.906776 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.906781 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.906786 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.906791 | orchestrator |
2026-04-18 00:58:56.906796 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-18 00:58:56.906801 | orchestrator | Saturday 18 April 2026 00:56:53 +0000 (0:00:00.465) 0:00:04.470 ********
2026-04-18 00:58:56.906806 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.906811 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.906816 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.906821 | orchestrator |
2026-04-18 00:58:56.906826 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-18 00:58:56.906831 | orchestrator | Saturday 18 April 2026 00:56:53 +0000 (0:00:00.291) 0:00:04.762 ********
2026-04-18 00:58:56.906836 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-18 00:58:56.906841 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-18 00:58:56.906846 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-18 00:58:56.906851 | orchestrator |
2026-04-18 00:58:56.906857 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-18 00:58:56.906866 | orchestrator | Saturday 18 April 2026 00:56:54 +0000 (0:00:00.601) 0:00:05.363 ********
2026-04-18 00:58:56.906875 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.906884 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.906891 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.906900 | orchestrator |
2026-04-18 00:58:56.906909 | orchestrator | TASK [ceph-facts : Find a running mon
container] ******************************* 2026-04-18 00:58:56.906917 | orchestrator | Saturday 18 April 2026 00:56:54 +0000 (0:00:00.383) 0:00:05.746 ******** 2026-04-18 00:58:56.906926 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-18 00:58:56.906934 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-18 00:58:56.906943 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-18 00:58:56.906952 | orchestrator | 2026-04-18 00:58:56.906960 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-18 00:58:56.906970 | orchestrator | Saturday 18 April 2026 00:56:57 +0000 (0:00:02.925) 0:00:08.672 ******** 2026-04-18 00:58:56.906975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-18 00:58:56.906980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-18 00:58:56.906986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-18 00:58:56.906990 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:58:56.906995 | orchestrator | 2026-04-18 00:58:56.907001 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-18 00:58:56.907005 | orchestrator | Saturday 18 April 2026 00:56:57 +0000 (0:00:00.367) 0:00:09.040 ******** 2026-04-18 00:58:56.907012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.907020 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 
'ansible_loop_var': 'item'})  2026-04-18 00:58:56.907030 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.907035 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:58:56.907040 | orchestrator | 2026-04-18 00:58:56.907049 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-18 00:58:56.907054 | orchestrator | Saturday 18 April 2026 00:56:58 +0000 (0:00:00.797) 0:00:09.837 ******** 2026-04-18 00:58:56.907060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.907088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.907095 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-04-18 00:58:56.907100 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:58:56.907105 | orchestrator | 2026-04-18 00:58:56.907110 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-18 00:58:56.907115 | orchestrator | Saturday 18 April 2026 00:56:58 +0000 (0:00:00.158) 0:00:09.995 ******** 2026-04-18 00:58:56.907122 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '543ee2eb341f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-18 00:56:55.582575', 'end': '2026-04-18 00:56:55.620328', 'delta': '0:00:00.037753', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['543ee2eb341f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-18 00:58:56.907131 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '68f86a3a9fd1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-18 00:56:56.589787', 'end': '2026-04-18 00:56:56.623936', 'delta': '0:00:00.034149', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['68f86a3a9fd1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-18 00:58:56.907141 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'da51d27e8faa', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-18 00:56:57.408675', 'end': '2026-04-18 00:56:57.449200', 'delta': '0:00:00.040525', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['da51d27e8faa'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-18 00:58:56.907147 | orchestrator | 2026-04-18 00:58:56.907152 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-18 00:58:56.907157 | orchestrator | Saturday 18 April 2026 00:56:59 +0000 (0:00:00.347) 0:00:10.343 ******** 2026-04-18 00:58:56.907162 | orchestrator | ok: [testbed-node-3] 2026-04-18 00:58:56.907167 | orchestrator | ok: [testbed-node-4] 2026-04-18 00:58:56.907175 | orchestrator | ok: [testbed-node-5] 2026-04-18 00:58:56.907180 | orchestrator | 2026-04-18 00:58:56.907185 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-18 00:58:56.907190 | orchestrator | Saturday 18 April 2026 00:56:59 +0000 (0:00:00.399) 0:00:10.743 ******** 2026-04-18 00:58:56.907195 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-18 00:58:56.907200 | orchestrator | 2026-04-18 00:58:56.907205 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-18 00:58:56.907210 | orchestrator | Saturday 18 April 2026 00:57:01 +0000 (0:00:01.863) 0:00:12.607 ******** 2026-04-18 00:58:56.907215 | orchestrator | skipping: [testbed-node-3] 2026-04-18 
00:58:56.907220 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907225 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907230 | orchestrator |
2026-04-18 00:58:56.907235 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-18 00:58:56.907240 | orchestrator | Saturday 18 April 2026 00:57:01 +0000 (0:00:00.292) 0:00:12.899 ********
2026-04-18 00:58:56.907261 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907267 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907276 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907285 | orchestrator |
2026-04-18 00:58:56.907294 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-18 00:58:56.907323 | orchestrator | Saturday 18 April 2026 00:57:02 +0000 (0:00:00.427) 0:00:13.327 ********
2026-04-18 00:58:56.907335 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907344 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907352 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907363 | orchestrator |
2026-04-18 00:58:56.907371 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-18 00:58:56.907379 | orchestrator | Saturday 18 April 2026 00:57:02 +0000 (0:00:00.442) 0:00:13.769 ********
2026-04-18 00:58:56.907387 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.907396 | orchestrator |
2026-04-18 00:58:56.907404 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-18 00:58:56.907413 | orchestrator | Saturday 18 April 2026 00:57:02 +0000 (0:00:00.125) 0:00:13.895 ********
2026-04-18 00:58:56.907421 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907429 | orchestrator |
2026-04-18 00:58:56.907438 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-18 00:58:56.907446 | orchestrator | Saturday 18 April 2026 00:57:03 +0000 (0:00:00.227) 0:00:14.122 ********
2026-04-18 00:58:56.907455 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907464 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907474 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907482 | orchestrator |
2026-04-18 00:58:56.907491 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-18 00:58:56.907507 | orchestrator | Saturday 18 April 2026 00:57:03 +0000 (0:00:00.344) 0:00:14.466 ********
2026-04-18 00:58:56.907513 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907518 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907523 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907528 | orchestrator |
2026-04-18 00:58:56.907533 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-18 00:58:56.907538 | orchestrator | Saturday 18 April 2026 00:57:03 +0000 (0:00:00.285) 0:00:14.752 ********
2026-04-18 00:58:56.907544 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907549 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907554 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907559 | orchestrator |
2026-04-18 00:58:56.907564 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-18 00:58:56.907569 | orchestrator | Saturday 18 April 2026 00:57:04 +0000 (0:00:00.452) 0:00:15.204 ********
2026-04-18 00:58:56.907574 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907579 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907584 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907589 | orchestrator |
2026-04-18 00:58:56.907594 | orchestrator | TASK [ceph-facts : Set_fact build
dedicated_devices from resolved symlinks] ****
2026-04-18 00:58:56.907600 | orchestrator | Saturday 18 April 2026 00:57:04 +0000 (0:00:00.302) 0:00:15.507 ********
2026-04-18 00:58:56.907605 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907610 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907615 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907620 | orchestrator |
2026-04-18 00:58:56.907625 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-18 00:58:56.907630 | orchestrator | Saturday 18 April 2026 00:57:04 +0000 (0:00:00.294) 0:00:15.809 ********
2026-04-18 00:58:56.907635 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907640 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907645 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907650 | orchestrator |
2026-04-18 00:58:56.907655 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-18 00:58:56.907660 | orchestrator | Saturday 18 April 2026 00:57:05 +0000 (0:00:00.492) 0:00:16.103 ********
2026-04-18 00:58:56.907665 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.907670 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.907675 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.907681 | orchestrator |
2026-04-18 00:58:56.907686 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-18 00:58:56.907691 | orchestrator | Saturday 18 April 2026 00:57:05 +0000 (0:00:00.492) 0:00:16.595 ********
2026-04-18 00:58:56.907701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93b19634--3a0b--57aa--985a--342cbb17f88c-osd--block--93b19634--3a0b--57aa--985a--342cbb17f88c',
'dm-uuid-LVM-YK0myVOptkexU4yylGGexJ0jaYs9lCfjfP61t0d7zNHbihuSC1ZAuo5tCihsfvgP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--97728c5d--edf3--594c--abdf--329078c85e67-osd--block--97728c5d--edf3--594c--abdf--329078c85e67', 'dm-uuid-LVM-qFE9dGHppFmKhOdCc4qDZz73myZWhdPMxadEiFiH5AIaaC87PH1zQ9oHlyxIc5o5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907761 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part1', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part14', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part15', 
'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part16', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.907929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93b19634--3a0b--57aa--985a--342cbb17f88c-osd--block--93b19634--3a0b--57aa--985a--342cbb17f88c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k487kU-yaqc-BXs0-GMkW-925l-J1IB-Afg7h4', 'scsi-0QEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447', 'scsi-SQEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.907943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9fd71a58--43ec--5e10--bd02--c7d805355b61-osd--block--9fd71a58--43ec--5e10--bd02--c7d805355b61', 'dm-uuid-LVM-TtJhrF2y0VyO4Sh6OfA0FLDMI90y59xBP29nR0p0I5oBou0AHhJqIV9AFvUXCxcb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.907957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--97728c5d--edf3--594c--abdf--329078c85e67-osd--block--97728c5d--edf3--594c--abdf--329078c85e67'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BQAqj4-jyAs-QeiJ-JOpD-ZItP-vhcO-2cvDia', 'scsi-0QEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b', 'scsi-SQEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.907994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a-osd--block--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a', 'dm-uuid-LVM-Xijz2X2U8n0Ed5NsAddEMEPKqTbNbDZmSThiyKvySp3zJkhCcVlExaDMUO7aJD8G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e', 'scsi-SQEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-18 00:58:56.908032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part14', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908210 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:58:56.908221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9fd71a58--43ec--5e10--bd02--c7d805355b61-osd--block--9fd71a58--43ec--5e10--bd02--c7d805355b61'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vd4WMr-taxH-kNJc-ee1o-JT06-W3Lq-V6APgm', 'scsi-0QEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0', 'scsi-SQEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a-osd--block--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xmEPRe-iMQ2-oq0P-9Wbs-J2QW-AhjF-qTStBM', 'scsi-0QEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527', 'scsi-SQEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d', 'scsi-SQEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908252 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:58:56.908257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe91ca0a--93bc--5e10--8732--62b62acecb68-osd--block--fe91ca0a--93bc--5e10--8732--62b62acecb68', 'dm-uuid-LVM-vxAXaLnoTR3noG6kTA5PQ91VH1N03DYFPTBDRXpGVHe0ELkRBtjb0x5wCQTXtTcQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--a409408a--9332--5b4b--a953--28c1be45fb12-osd--block--a409408a--9332--5b4b--a953--28c1be45fb12', 'dm-uuid-LVM-dsseoypKsuamXePlZC8Uc3qwD7RMbzZPjnImWAbzG6PG7EKQVP1eJvcLfiXWqcDJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-18 00:58:56.908481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fe91ca0a--93bc--5e10--8732--62b62acecb68-osd--block--fe91ca0a--93bc--5e10--8732--62b62acecb68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XjJwIb-fx48-eBoK-2o3I-RfXz-21zi-Yu5tj7', 'scsi-0QEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a', 'scsi-SQEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a409408a--9332--5b4b--a953--28c1be45fb12-osd--block--a409408a--9332--5b4b--a953--28c1be45fb12'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3SXKxf-RChQ-pDHv-9T2b-NCQG-Gfi0-74c3lw', 'scsi-0QEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389', 'scsi-SQEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231', 'scsi-SQEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-18 00:58:56.908525 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:58:56.908530 | orchestrator | 2026-04-18 00:58:56.908539 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-18 00:58:56.908545 | orchestrator | Saturday 18 April 2026 00:57:06 +0000 (0:00:00.567) 0:00:17.163 ******** 2026-04-18 00:58:56.908554 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93b19634--3a0b--57aa--985a--342cbb17f88c-osd--block--93b19634--3a0b--57aa--985a--342cbb17f88c', 'dm-uuid-LVM-YK0myVOptkexU4yylGGexJ0jaYs9lCfjfP61t0d7zNHbihuSC1ZAuo5tCihsfvgP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--97728c5d--edf3--594c--abdf--329078c85e67-osd--block--97728c5d--edf3--594c--abdf--329078c85e67', 'dm-uuid-LVM-qFE9dGHppFmKhOdCc4qDZz73myZWhdPMxadEiFiH5AIaaC87PH1zQ9oHlyxIc5o5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908571 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908576 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908586 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908603 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908613 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908623 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9fd71a58--43ec--5e10--bd02--c7d805355b61-osd--block--9fd71a58--43ec--5e10--bd02--c7d805355b61', 'dm-uuid-LVM-TtJhrF2y0VyO4Sh6OfA0FLDMI90y59xBP29nR0p0I5oBou0AHhJqIV9AFvUXCxcb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part1', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part14', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part15', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part16', 'scsi-SQEMU_QEMU_HARDDISK_f72b8c50-fdce-43ff-a6a0-37b5c0548b80-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a-osd--block--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a', 'dm-uuid-LVM-Xijz2X2U8n0Ed5NsAddEMEPKqTbNbDZmSThiyKvySp3zJkhCcVlExaDMUO7aJD8G'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908655 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--93b19634--3a0b--57aa--985a--342cbb17f88c-osd--block--93b19634--3a0b--57aa--985a--342cbb17f88c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-k487kU-yaqc-BXs0-GMkW-925l-J1IB-Afg7h4', 'scsi-0QEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447', 'scsi-SQEMU_QEMU_HARDDISK_60a8bfc5-b9d0-4eee-9755-9876332cb447'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908660 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--97728c5d--edf3--594c--abdf--329078c85e67-osd--block--97728c5d--edf3--594c--abdf--329078c85e67'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BQAqj4-jyAs-QeiJ-JOpD-ZItP-vhcO-2cvDia', 'scsi-0QEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b', 'scsi-SQEMU_QEMU_HARDDISK_7641f05d-52e5-4bbf-9f1e-20500225c31b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908686 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e', 'scsi-SQEMU_QEMU_HARDDISK_7f3bae2c-a706-4f5f-b58d-9951f9e0de3e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908691 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908706 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908711 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:58:56.908719 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908724 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908732 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908737 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe91ca0a--93bc--5e10--8732--62b62acecb68-osd--block--fe91ca0a--93bc--5e10--8732--62b62acecb68', 'dm-uuid-LVM-vxAXaLnoTR3noG6kTA5PQ91VH1N03DYFPTBDRXpGVHe0ELkRBtjb0x5wCQTXtTcQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908758 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part14', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0fd00ce-8f6e-4404-96a8-996c2c5cf5ee-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 
1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a409408a--9332--5b4b--a953--28c1be45fb12-osd--block--a409408a--9332--5b4b--a953--28c1be45fb12', 'dm-uuid-LVM-dsseoypKsuamXePlZC8Uc3qwD7RMbzZPjnImWAbzG6PG7EKQVP1eJvcLfiXWqcDJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908769 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9fd71a58--43ec--5e10--bd02--c7d805355b61-osd--block--9fd71a58--43ec--5e10--bd02--c7d805355b61'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vd4WMr-taxH-kNJc-ee1o-JT06-W3Lq-V6APgm', 'scsi-0QEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0', 'scsi-SQEMU_QEMU_HARDDISK_ca3776c9-6de1-440c-b162-bababc8c77c0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908792 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a-osd--block--0a0ecf7f--ac15--597c--a1da--c22b9ec93d1a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xmEPRe-iMQ2-oq0P-9Wbs-J2QW-AhjF-qTStBM', 'scsi-0QEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527', 'scsi-SQEMU_QEMU_HARDDISK_3edcf3c9-33ad-4391-9ec0-9b04bbd0d527'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d', 'scsi-SQEMU_QEMU_HARDDISK_599ac768-9444-46f8-8891-4fc4025f5c6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908821 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908827 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908836 | orchestrator | skipping: 
[testbed-node-4] 2026-04-18 00:58:56.908841 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908862 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908868 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part1', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part14', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part15', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part16', 'scsi-SQEMU_QEMU_HARDDISK_1f6e4506-6b1e-430c-8147-62b3e11670d5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-18 00:58:56.908890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fe91ca0a--93bc--5e10--8732--62b62acecb68-osd--block--fe91ca0a--93bc--5e10--8732--62b62acecb68'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XjJwIb-fx48-eBoK-2o3I-RfXz-21zi-Yu5tj7', 'scsi-0QEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a', 'scsi-SQEMU_QEMU_HARDDISK_73edcc3e-3dbf-4300-a5bd-7c80b242e92a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908895 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a409408a--9332--5b4b--a953--28c1be45fb12-osd--block--a409408a--9332--5b4b--a953--28c1be45fb12'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-3SXKxf-RChQ-pDHv-9T2b-NCQG-Gfi0-74c3lw', 'scsi-0QEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389', 'scsi-SQEMU_QEMU_HARDDISK_b8f980bd-a337-4427-8cb0-8c7c2531a389'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231', 'scsi-SQEMU_QEMU_HARDDISK_5963e3c2-4ca5-403e-a695-14ba4ecb8231'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-18 00:58:56.908909 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-18-00-03-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-18 00:58:56.908915 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.908920 | orchestrator |
2026-04-18 00:58:56.908925 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-18 00:58:56.908930 | orchestrator | Saturday 18 April 2026 00:57:06 +0000 (0:00:00.550) 0:00:17.714 ********
2026-04-18 00:58:56.908934 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.908939 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.908944 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.908949 | orchestrator |
2026-04-18 00:58:56.908954 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-18 00:58:56.908959 | orchestrator | Saturday 18 April 2026 00:57:07 +0000 (0:00:00.718) 0:00:18.432 ********
2026-04-18 00:58:56.908963 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.908968 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.908973 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.908978 | orchestrator |
2026-04-18 00:58:56.908987 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-18 00:58:56.908992 | orchestrator | Saturday 18 April 2026 00:57:07 +0000 (0:00:00.431) 0:00:18.864 ********
2026-04-18 00:58:56.908996 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.909001 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.909006 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.909010 | orchestrator |
2026-04-18 00:58:56.909015 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-18 00:58:56.909019 | orchestrator | Saturday 18 April 2026 00:57:08 +0000 (0:00:00.658) 0:00:19.523 ********
2026-04-18 00:58:56.909024 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909028 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.909032 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.909037 | orchestrator |
2026-04-18 00:58:56.909041 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-18 00:58:56.909045 | orchestrator | Saturday 18 April 2026 00:57:08 +0000 (0:00:00.263) 0:00:19.786 ********
2026-04-18 00:58:56.909052 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909057 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.909061 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.909065 | orchestrator |
2026-04-18 00:58:56.909069 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-18 00:58:56.909074 | orchestrator | Saturday 18 April 2026 00:57:09 +0000 (0:00:00.439) 0:00:20.170 ********
2026-04-18 00:58:56.909078 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909086 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.909091 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.909095 | orchestrator |
2026-04-18 00:58:56.909099 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-18 00:58:56.909104 | orchestrator | Saturday 18 April 2026 00:57:09 +0000 (0:00:00.439) 0:00:20.609 ********
2026-04-18 00:58:56.909108 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-18 00:58:56.909113 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-18 00:58:56.909117 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-18 00:58:56.909121 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-18 00:58:56.909126 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-18 00:58:56.909130 | orchestrator |
ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-18 00:58:56.909134 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-18 00:58:56.909139 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-18 00:58:56.909143 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-18 00:58:56.909147 | orchestrator | 2026-04-18 00:58:56.909151 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-18 00:58:56.909156 | orchestrator | Saturday 18 April 2026 00:57:10 +0000 (0:00:00.790) 0:00:21.400 ******** 2026-04-18 00:58:56.909160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-18 00:58:56.909164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-18 00:58:56.909168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-18 00:58:56.909173 | orchestrator | skipping: [testbed-node-3] 2026-04-18 00:58:56.909188 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-18 00:58:56.909193 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-18 00:58:56.909204 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-18 00:58:56.909208 | orchestrator | skipping: [testbed-node-4] 2026-04-18 00:58:56.909212 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-18 00:58:56.909216 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-18 00:58:56.909221 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-18 00:58:56.909225 | orchestrator | skipping: [testbed-node-5] 2026-04-18 00:58:56.909229 | orchestrator | 2026-04-18 00:58:56.909233 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-18 00:58:56.909238 | orchestrator | Saturday 18 April 2026 00:57:10 +0000 (0:00:00.312) 0:00:21.713 ******** 2026-04-18 
00:58:56.909243 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-18 00:58:56.909247 | orchestrator |
2026-04-18 00:58:56.909251 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-18 00:58:56.909256 | orchestrator | Saturday 18 April 2026 00:57:11 +0000 (0:00:00.636) 0:00:22.349 ********
2026-04-18 00:58:56.909261 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909265 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.909270 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.909274 | orchestrator |
2026-04-18 00:58:56.909278 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-18 00:58:56.909283 | orchestrator | Saturday 18 April 2026 00:57:11 +0000 (0:00:00.288) 0:00:22.638 ********
2026-04-18 00:58:56.909287 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909292 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.909296 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.909316 | orchestrator |
2026-04-18 00:58:56.909324 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-18 00:58:56.909331 | orchestrator | Saturday 18 April 2026 00:57:11 +0000 (0:00:00.284) 0:00:22.922 ********
2026-04-18 00:58:56.909343 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909350 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.909354 | orchestrator | skipping: [testbed-node-5]
2026-04-18 00:58:56.909358 | orchestrator |
2026-04-18 00:58:56.909363 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-18 00:58:56.909367 | orchestrator | Saturday 18 April 2026 00:57:12 +0000 (0:00:00.294) 0:00:23.217 ********
2026-04-18 00:58:56.909372 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.909376 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.909380 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.909385 | orchestrator |
2026-04-18 00:58:56.909392 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-18 00:58:56.909397 | orchestrator | Saturday 18 April 2026 00:57:12 +0000 (0:00:00.532) 0:00:23.750 ********
2026-04-18 00:58:56.909401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:58:56.909406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:58:56.909410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:58:56.909414 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909419 | orchestrator |
2026-04-18 00:58:56.909423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-18 00:58:56.909427 | orchestrator | Saturday 18 April 2026 00:57:13 +0000 (0:00:00.342) 0:00:24.093 ********
2026-04-18 00:58:56.909432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:58:56.909436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:58:56.909440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:58:56.909448 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909453 | orchestrator |
2026-04-18 00:58:56.909457 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-18 00:58:56.909462 | orchestrator | Saturday 18 April 2026 00:57:13 +0000 (0:00:00.354) 0:00:24.447 ********
2026-04-18 00:58:56.909466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:58:56.909471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-18 00:58:56.909475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-18 00:58:56.909479 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909484 | orchestrator |
2026-04-18 00:58:56.909488 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-18 00:58:56.909492 | orchestrator | Saturday 18 April 2026 00:57:13 +0000 (0:00:00.371) 0:00:24.819 ********
2026-04-18 00:58:56.909497 | orchestrator | ok: [testbed-node-3]
2026-04-18 00:58:56.909501 | orchestrator | ok: [testbed-node-4]
2026-04-18 00:58:56.909506 | orchestrator | ok: [testbed-node-5]
2026-04-18 00:58:56.909510 | orchestrator |
2026-04-18 00:58:56.909515 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-18 00:58:56.909519 | orchestrator | Saturday 18 April 2026 00:57:14 +0000 (0:00:00.302) 0:00:25.121 ********
2026-04-18 00:58:56.909523 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-18 00:58:56.909528 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-18 00:58:56.909532 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-18 00:58:56.909536 | orchestrator |
2026-04-18 00:58:56.909541 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-18 00:58:56.909545 | orchestrator | Saturday 18 April 2026 00:57:14 +0000 (0:00:00.475) 0:00:25.597 ********
2026-04-18 00:58:56.909550 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-18 00:58:56.909554 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-18 00:58:56.909559 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-18 00:58:56.909563 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:58:56.909567 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-18 00:58:56.909579 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-18 00:58:56.909584 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-18 00:58:56.909589 | orchestrator |
2026-04-18 00:58:56.909593 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-18 00:58:56.909597 | orchestrator | Saturday 18 April 2026 00:57:15 +0000 (0:00:00.924) 0:00:26.521 ********
2026-04-18 00:58:56.909602 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-18 00:58:56.909606 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-18 00:58:56.909611 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-18 00:58:56.909615 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-18 00:58:56.909619 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-18 00:58:56.909624 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-18 00:58:56.909628 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-18 00:58:56.909633 | orchestrator |
2026-04-18 00:58:56.909637 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-04-18 00:58:56.909641 | orchestrator | Saturday 18 April 2026 00:57:17 +0000 (0:00:01.831) 0:00:28.352 ********
2026-04-18 00:58:56.909646 | orchestrator | skipping: [testbed-node-3]
2026-04-18 00:58:56.909650 | orchestrator | skipping: [testbed-node-4]
2026-04-18 00:58:56.909655 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-04-18 00:58:56.909659 | orchestrator |
2026-04-18 00:58:56.909664 |
orchestrator | TASK [create openstack pool(s)] ************************************************
2026-04-18 00:58:56.909668 | orchestrator | Saturday 18 April 2026 00:57:17 +0000 (0:00:00.381) 0:00:28.733 ********
2026-04-18 00:58:56.909674 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-18 00:58:56.909684 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-18 00:58:56.909689 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-18 00:58:56.909698 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-18 00:58:56.909703 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-18 00:58:56.909707 | orchestrator |
2026-04-18 00:58:56.909712 | orchestrator | TASK [generate keys] ***********************************************************
2026-04-18 00:58:56.909716 | orchestrator | Saturday 18 April 2026 00:58:01 +0000 (0:00:43.801) 0:01:12.535 ********
2026-04-18 00:58:56.909721 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909725 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909733 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909738 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909742 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909746 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909751 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-04-18 00:58:56.909755 | orchestrator |
2026-04-18 00:58:56.909760 | orchestrator | TASK [get keys from monitors] **************************************************
2026-04-18 00:58:56.909765 | orchestrator | Saturday 18 April 2026 00:58:25 +0000 (0:00:23.918) 0:01:36.453 ********
2026-04-18 00:58:56.909769 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909773 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909778 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909782 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909787 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909791 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909796 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-18 00:58:56.909800 | orchestrator |
2026-04-18 00:58:56.909804 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-04-18 00:58:56.909809 | orchestrator | Saturday 18 April 2026 00:58:37 +0000 (0:00:12.157) 0:01:48.611 ********
2026-04-18 00:58:56.909813 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909818 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-18 00:58:56.909822 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-18 00:58:56.909826 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909830 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-18 00:58:56.909849 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-18 00:58:56.909854 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909858 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-18 00:58:56.909862 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-18 00:58:56.909867 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909875 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-18 00:58:56.909882 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-18 00:58:56.909889 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909895 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-18 00:58:56.909903 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-18 00:58:56.909912 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-18 00:58:56.909924 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-18 00:58:56.909932 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-18 00:58:56.909940 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-04-18 00:58:56.909945 | orchestrator |
2026-04-18 00:58:56.909950 | orchestrator | PLAY RECAP *********************************************************************
2026-04-18 00:58:56.909959 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-18 00:58:56.909963 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-18 00:58:56.909972 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-18 00:58:56.909976 | orchestrator |
2026-04-18 00:58:56.909981 | orchestrator |
2026-04-18 00:58:56.909985 | orchestrator |
2026-04-18 00:58:56.909990 | orchestrator | TASKS RECAP ********************************************************************
2026-04-18 00:58:56.909994 | orchestrator | Saturday 18 April 2026 00:58:54 +0000 (0:00:17.345) 0:02:05.956 ********
2026-04-18 00:58:56.909999 | orchestrator | ===============================================================================
2026-04-18 00:58:56.910003 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.80s
2026-04-18 00:58:56.910007 | orchestrator | generate keys ---------------------------------------------------------- 23.92s
2026-04-18 00:58:56.910053 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.35s
2026-04-18 00:58:56.910060 | orchestrator | get keys from monitors ------------------------------------------------- 12.16s
2026-04-18 00:58:56.910065 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.93s
2026-04-18 00:58:56.910069 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.86s
2026-04-18 00:58:56.910073 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.83s
2026-04-18 00:58:56.910078 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.98s
2026-04-18 00:58:56.910082 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s
2026-04-18 00:58:56.910086 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s
2026-04-18 00:58:56.910091 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.79s
2026-04-18 00:58:56.910095 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.77s
2026-04-18 00:58:56.910108 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s
2026-04-18 00:58:56.910112 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s
2026-04-18 00:58:56.910117 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.64s
2026-04-18 00:58:56.910121 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s
2026-04-18 00:58:56.910125 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.59s
2026-04-18 00:58:56.910130 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.57s
2026-04-18 00:58:56.910134 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.55s
2026-04-18 00:58:56.910139 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.53s
2026-04-18 00:58:56.910143 | orchestrator | 2026-04-18 00:58:56 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:58:59.943734 | orchestrator | 2026-04-18 00:58:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:58:59.944731 | orchestrator | 2026-04-18 00:58:59 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:58:59.944781 | orchestrator | 2026-04-18 00:58:59 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:02.981381 | orchestrator | 2026-04-18 00:59:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:02.983033 | orchestrator | 2026-04-18 00:59:02 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:59:02.983116 | orchestrator | 2026-04-18 00:59:02 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:06.024168 | orchestrator | 2026-04-18 00:59:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:06.024708 | orchestrator | 2026-04-18 00:59:06 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:59:06.024785 | orchestrator | 2026-04-18 00:59:06 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:09.068909 | orchestrator | 2026-04-18 00:59:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:09.069781 | orchestrator | 2026-04-18 00:59:09 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:59:09.069831 | orchestrator | 2026-04-18 00:59:09 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:12.115697 | orchestrator | 2026-04-18 00:59:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:12.116955 | orchestrator | 2026-04-18 00:59:12 | INFO  | Task
998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:59:12.117009 | orchestrator | 2026-04-18 00:59:12 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:15.160271 | orchestrator | 2026-04-18 00:59:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:15.161757 | orchestrator | 2026-04-18 00:59:15 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:59:15.162541 | orchestrator | 2026-04-18 00:59:15 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:18.201531 | orchestrator | 2026-04-18 00:59:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:18.203122 | orchestrator | 2026-04-18 00:59:18 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:59:18.203176 | orchestrator | 2026-04-18 00:59:18 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:21.247931 | orchestrator | 2026-04-18 00:59:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:21.248127 | orchestrator | 2026-04-18 00:59:21 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:59:21.248159 | orchestrator | 2026-04-18 00:59:21 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:24.290619 | orchestrator | 2026-04-18 00:59:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:24.291913 | orchestrator | 2026-04-18 00:59:24 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:59:24.291950 | orchestrator | 2026-04-18 00:59:24 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:27.334244 | orchestrator | 2026-04-18 00:59:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:27.336672 | orchestrator | 2026-04-18 00:59:27 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state STARTED
2026-04-18 00:59:27.336905 | orchestrator | 2026-04-18 00:59:27 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:30.381404 | orchestrator | 2026-04-18 00:59:30 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:30.381592 | orchestrator | 2026-04-18 00:59:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:30.382977 | orchestrator | 2026-04-18 00:59:30 | INFO  | Task 998b3b7d-9a39-4bd0-853d-410c38a1c2f1 is in state SUCCESS
2026-04-18 00:59:30.383024 | orchestrator | 2026-04-18 00:59:30 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:33.422286 | orchestrator | 2026-04-18 00:59:33 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:33.422689 | orchestrator | 2026-04-18 00:59:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:33.422715 | orchestrator | 2026-04-18 00:59:33 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:36.468352 | orchestrator | 2026-04-18 00:59:36 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:36.468972 | orchestrator | 2026-04-18 00:59:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:36.469072 | orchestrator | 2026-04-18 00:59:36 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:39.513646 | orchestrator | 2026-04-18 00:59:39 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:39.514232 | orchestrator | 2026-04-18 00:59:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:39.514252 | orchestrator | 2026-04-18 00:59:39 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:42.556116 | orchestrator | 2026-04-18 00:59:42 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:42.556275 | orchestrator | 2026-04-18 00:59:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:42.556408 | orchestrator | 2026-04-18 00:59:42 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:45.597999 | orchestrator | 2026-04-18 00:59:45 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:45.600348 | orchestrator | 2026-04-18 00:59:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:45.600400 | orchestrator | 2026-04-18 00:59:45 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:48.649053 | orchestrator | 2026-04-18 00:59:48 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:48.651626 | orchestrator | 2026-04-18 00:59:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:48.651668 | orchestrator | 2026-04-18 00:59:48 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:51.689734 | orchestrator | 2026-04-18 00:59:51 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:51.690735 | orchestrator | 2026-04-18 00:59:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:51.690785 | orchestrator | 2026-04-18 00:59:51 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:54.727279 | orchestrator | 2026-04-18 00:59:54 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:54.728156 | orchestrator | 2026-04-18 00:59:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:54.728200 | orchestrator | 2026-04-18 00:59:54 | INFO  | Wait 1 second(s) until the next check
2026-04-18 00:59:57.774006 | orchestrator | 2026-04-18 00:59:57 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 00:59:57.776572 | orchestrator | 2026-04-18 00:59:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 00:59:57.776651 | orchestrator | 2026-04-18 00:59:57 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:00:00.819270 | orchestrator | 2026-04-18 01:00:00 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 01:00:00.819731 | orchestrator | 2026-04-18 01:00:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:00:00.819795 | orchestrator | 2026-04-18 01:00:00 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:00:03.855846 | orchestrator | 2026-04-18 01:00:03 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 01:00:03.857557 | orchestrator | 2026-04-18 01:00:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:00:03.857621 | orchestrator | 2026-04-18 01:00:03 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:00:06.901798 | orchestrator | 2026-04-18 01:00:06 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 01:00:06.901890 | orchestrator | 2026-04-18 01:00:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:00:06.901902 | orchestrator | 2026-04-18 01:00:06 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:00:09.940167 | orchestrator | 2026-04-18 01:00:09 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 01:00:09.943879 | orchestrator | 2026-04-18 01:00:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:00:09.944094 | orchestrator | 2026-04-18 01:00:09 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:00:12.990381 | orchestrator | 2026-04-18 01:00:12 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 01:00:12.991955 | orchestrator | 2026-04-18 01:00:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:00:12.992017 | orchestrator | 2026-04-18 01:00:12 | INFO  | Wait 1 second(s)
until the next check
2026-04-18 01:00:16.040351 | orchestrator | 2026-04-18 01:00:16 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 01:00:16.042869 | orchestrator | 2026-04-18 01:00:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:00:16.043100 | orchestrator | 2026-04-18 01:00:16 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:00:19.091071 | orchestrator | 2026-04-18 01:00:19 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 01:00:19.092428 | orchestrator | 2026-04-18 01:00:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:00:19.093414 | orchestrator | 2026-04-18 01:00:19 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:00:22.139701 | orchestrator | 2026-04-18 01:00:22 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 01:00:22.141136 | orchestrator | 2026-04-18 01:00:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:00:22.141971 | orchestrator | 2026-04-18 01:00:22 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:00:25.192222 | orchestrator | 2026-04-18 01:00:25 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state STARTED
2026-04-18 01:00:25.193998 | orchestrator | 2026-04-18 01:00:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:00:25.194095 | orchestrator | 2026-04-18 01:00:25 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:00:28.239767 | orchestrator | 2026-04-18 01:00:28 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED
2026-04-18 01:00:28.241613 | orchestrator | 2026-04-18 01:00:28 | INFO  | Task e4406f62-15f8-43c1-9430-e4210194db61 is in state SUCCESS
2026-04-18 01:00:28.241675 | orchestrator |
2026-04-18 01:00:28.241682 | orchestrator |
2026-04-18 01:00:28.241687 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-18 01:00:28.241692 | orchestrator |
2026-04-18 01:00:28.241696 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-18 01:00:28.241725 | orchestrator | Saturday 18 April 2026 00:58:57 +0000 (0:00:00.209) 0:00:00.209 ********
2026-04-18 01:00:28.241731 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-18 01:00:28.241739 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-18 01:00:28.241744 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-18 01:00:28.241750 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-18 01:00:28.241756 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-18 01:00:28.241762 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-18 01:00:28.241767 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-18 01:00:28.241783 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-18 01:00:28.241789 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-18 01:00:28.241795 | orchestrator |
2026-04-18 01:00:28.241801 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-18 01:00:28.241807 | orchestrator | Saturday 18 April 2026 00:59:02 +0000 (0:00:04.795) 0:00:05.004 ********
2026-04-18 01:00:28.241813 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-18
01:00:28.241819 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-18 01:00:28.241825 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-18 01:00:28.241831 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-18 01:00:28.241837 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-18 01:00:28.241843 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-18 01:00:28.241848 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-18 01:00:28.241855 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-18 01:00:28.241861 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-18 01:00:28.241867 | orchestrator | 2026-04-18 01:00:28.241873 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-18 01:00:28.241879 | orchestrator | Saturday 18 April 2026 00:59:06 +0000 (0:00:04.106) 0:00:09.111 ******** 2026-04-18 01:00:28.241887 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-18 01:00:28.241894 | orchestrator | 2026-04-18 01:00:28.241900 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-18 01:00:28.241908 | orchestrator | Saturday 18 April 2026 00:59:07 +0000 (0:00:00.919) 0:00:10.030 ******** 2026-04-18 01:00:28.241914 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-18 01:00:28.241921 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-18 
01:00:28.241927 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-18 01:00:28.241933 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-18 01:00:28.241940 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-18 01:00:28.241945 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-18 01:00:28.241952 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-18 01:00:28.241967 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-18 01:00:28.241974 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-18 01:00:28.241979 | orchestrator | 2026-04-18 01:00:28.241987 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-18 01:00:28.241993 | orchestrator | Saturday 18 April 2026 00:59:20 +0000 (0:00:12.366) 0:00:22.397 ******** 2026-04-18 01:00:28.242059 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-18 01:00:28.242069 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-18 01:00:28.242082 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-18 01:00:28.242088 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-18 01:00:28.242105 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-18 01:00:28.242111 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-18 01:00:28.242117 | 
orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-18 01:00:28.242124 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-18 01:00:28.242129 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-18 01:00:28.242135 | orchestrator | 2026-04-18 01:00:28.242141 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-18 01:00:28.242147 | orchestrator | Saturday 18 April 2026 00:59:23 +0000 (0:00:02.883) 0:00:25.280 ******** 2026-04-18 01:00:28.242154 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-18 01:00:28.242160 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-18 01:00:28.242166 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-18 01:00:28.242172 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-18 01:00:28.242178 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-18 01:00:28.242184 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-18 01:00:28.242189 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-18 01:00:28.242195 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-18 01:00:28.242201 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-18 01:00:28.242207 | orchestrator | 2026-04-18 01:00:28.242213 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 01:00:28.242219 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-18 01:00:28.242227 | orchestrator | 2026-04-18 
01:00:28.242234 | orchestrator | 2026-04-18 01:00:28.242240 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 01:00:28.242246 | orchestrator | Saturday 18 April 2026 00:59:29 +0000 (0:00:06.008) 0:00:31.289 ******** 2026-04-18 01:00:28.242252 | orchestrator | =============================================================================== 2026-04-18 01:00:28.242258 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.37s 2026-04-18 01:00:28.242264 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.01s 2026-04-18 01:00:28.242291 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.80s 2026-04-18 01:00:28.242299 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.11s 2026-04-18 01:00:28.242306 | orchestrator | Check if target directories exist --------------------------------------- 2.88s 2026-04-18 01:00:28.242319 | orchestrator | Create share directory -------------------------------------------------- 0.92s 2026-04-18 01:00:28.242324 | orchestrator | 2026-04-18 01:00:28.242329 | orchestrator | 2026-04-18 01:00:28.242333 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-18 01:00:28.242338 | orchestrator | 2026-04-18 01:00:28.242342 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-18 01:00:28.242347 | orchestrator | Saturday 18 April 2026 00:59:32 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-04-18 01:00:28.242351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-18 01:00:28.242357 | orchestrator | 2026-04-18 01:00:28.242361 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-18 
01:00:28.242366 | orchestrator | Saturday 18 April 2026 00:59:32 +0000 (0:00:00.214) 0:00:00.492 ******** 2026-04-18 01:00:28.242370 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-18 01:00:28.242375 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-18 01:00:28.242380 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-18 01:00:28.242384 | orchestrator | 2026-04-18 01:00:28.242395 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-18 01:00:28.242400 | orchestrator | Saturday 18 April 2026 00:59:33 +0000 (0:00:01.419) 0:00:01.912 ******** 2026-04-18 01:00:28.242405 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-18 01:00:28.242409 | orchestrator | 2026-04-18 01:00:28.242413 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-18 01:00:28.242418 | orchestrator | Saturday 18 April 2026 00:59:34 +0000 (0:00:00.997) 0:00:02.909 ******** 2026-04-18 01:00:28.242422 | orchestrator | changed: [testbed-manager] 2026-04-18 01:00:28.242427 | orchestrator | 2026-04-18 01:00:28.242432 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-18 01:00:28.242441 | orchestrator | Saturday 18 April 2026 00:59:35 +0000 (0:00:00.831) 0:00:03.741 ******** 2026-04-18 01:00:28.242446 | orchestrator | changed: [testbed-manager] 2026-04-18 01:00:28.242451 | orchestrator | 2026-04-18 01:00:28.242455 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-18 01:00:28.242460 | orchestrator | Saturday 18 April 2026 00:59:36 +0000 (0:00:00.780) 0:00:04.521 ******** 2026-04-18 01:00:28.242463 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-04-18 01:00:28.242467 | orchestrator | ok: [testbed-manager] 2026-04-18 01:00:28.242471 | orchestrator | 2026-04-18 01:00:28.242475 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-18 01:00:28.242485 | orchestrator | Saturday 18 April 2026 01:00:16 +0000 (0:00:39.613) 0:00:44.135 ******** 2026-04-18 01:00:28.242489 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-18 01:00:28.242493 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-18 01:00:28.242496 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-18 01:00:28.242500 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-18 01:00:28.242504 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-18 01:00:28.242508 | orchestrator | 2026-04-18 01:00:28.242511 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-18 01:00:28.242515 | orchestrator | Saturday 18 April 2026 01:00:20 +0000 (0:00:03.982) 0:00:48.117 ******** 2026-04-18 01:00:28.242519 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-18 01:00:28.242523 | orchestrator | 2026-04-18 01:00:28.242526 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-18 01:00:28.242530 | orchestrator | Saturday 18 April 2026 01:00:20 +0000 (0:00:00.602) 0:00:48.720 ******** 2026-04-18 01:00:28.242534 | orchestrator | skipping: [testbed-manager] 2026-04-18 01:00:28.242543 | orchestrator | 2026-04-18 01:00:28.242546 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-18 01:00:28.242550 | orchestrator | Saturday 18 April 2026 01:00:20 +0000 (0:00:00.128) 0:00:48.849 ******** 2026-04-18 01:00:28.242554 | orchestrator | skipping: [testbed-manager] 2026-04-18 01:00:28.242558 | orchestrator | 2026-04-18 01:00:28.242561 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-04-18 01:00:28.242565 | orchestrator | Saturday 18 April 2026 01:00:21 +0000 (0:00:00.313) 0:00:49.162 ******** 2026-04-18 01:00:28.242569 | orchestrator | changed: [testbed-manager] 2026-04-18 01:00:28.242573 | orchestrator | 2026-04-18 01:00:28.242576 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-18 01:00:28.242580 | orchestrator | Saturday 18 April 2026 01:00:22 +0000 (0:00:01.339) 0:00:50.501 ******** 2026-04-18 01:00:28.242584 | orchestrator | changed: [testbed-manager] 2026-04-18 01:00:28.242587 | orchestrator | 2026-04-18 01:00:28.242591 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-18 01:00:28.242595 | orchestrator | Saturday 18 April 2026 01:00:23 +0000 (0:00:00.692) 0:00:51.193 ******** 2026-04-18 01:00:28.242599 | orchestrator | changed: [testbed-manager] 2026-04-18 01:00:28.242602 | orchestrator | 2026-04-18 01:00:28.242606 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-18 01:00:28.242610 | orchestrator | Saturday 18 April 2026 01:00:23 +0000 (0:00:00.579) 0:00:51.772 ******** 2026-04-18 01:00:28.242614 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-18 01:00:28.242617 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-18 01:00:28.242621 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-18 01:00:28.242625 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-18 01:00:28.242628 | orchestrator | 2026-04-18 01:00:28.242632 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 01:00:28.242636 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-18 01:00:28.242640 | orchestrator | 2026-04-18 01:00:28.242644 | orchestrator | 2026-04-18 
01:00:28.242647 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 01:00:28.242651 | orchestrator | Saturday 18 April 2026 01:00:25 +0000 (0:00:01.420) 0:00:53.193 ******** 2026-04-18 01:00:28.242655 | orchestrator | =============================================================================== 2026-04-18 01:00:28.242658 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.61s 2026-04-18 01:00:28.242662 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.98s 2026-04-18 01:00:28.242666 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.42s 2026-04-18 01:00:28.242669 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.42s 2026-04-18 01:00:28.242673 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.34s 2026-04-18 01:00:28.242677 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.00s 2026-04-18 01:00:28.242681 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.83s 2026-04-18 01:00:28.242684 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.78s 2026-04-18 01:00:28.242688 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2026-04-18 01:00:28.242692 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.60s 2026-04-18 01:00:28.242696 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s 2026-04-18 01:00:28.242699 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2026-04-18 01:00:28.242703 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2026-04-18 01:00:28.242707 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-04-18 01:00:28.247071 | orchestrator | 2026-04-18 01:00:28 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:28.248016 | orchestrator | 2026-04-18 01:00:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:28.248767 | orchestrator | 2026-04-18 01:00:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:28.248804 | orchestrator | 2026-04-18 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:31.284248 | orchestrator | 2026-04-18 01:00:31 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:31.286412 | orchestrator | 2026-04-18 01:00:31 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:31.287031 | orchestrator | 2026-04-18 01:00:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:31.289337 | orchestrator | 2026-04-18 01:00:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:31.289385 | orchestrator | 2026-04-18 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:34.337245 | orchestrator | 2026-04-18 01:00:34 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:34.340799 | orchestrator | 2026-04-18 01:00:34 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:34.343024 | orchestrator | 2026-04-18 01:00:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:34.344619 | orchestrator | 2026-04-18 01:00:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:34.344738 | orchestrator | 2026-04-18 01:00:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:37.383419 | orchestrator | 2026-04-18 01:00:37 | INFO  
| Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:37.384614 | orchestrator | 2026-04-18 01:00:37 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:37.389585 | orchestrator | 2026-04-18 01:00:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:37.392566 | orchestrator | 2026-04-18 01:00:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:37.392652 | orchestrator | 2026-04-18 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:40.434964 | orchestrator | 2026-04-18 01:00:40 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:40.435051 | orchestrator | 2026-04-18 01:00:40 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:40.436021 | orchestrator | 2026-04-18 01:00:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:40.437021 | orchestrator | 2026-04-18 01:00:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:40.437072 | orchestrator | 2026-04-18 01:00:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:43.480075 | orchestrator | 2026-04-18 01:00:43 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:43.481883 | orchestrator | 2026-04-18 01:00:43 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:43.482813 | orchestrator | 2026-04-18 01:00:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:43.483653 | orchestrator | 2026-04-18 01:00:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:43.483779 | orchestrator | 2026-04-18 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:46.524744 | orchestrator | 2026-04-18 01:00:46 | INFO  | Task 
ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:46.526514 | orchestrator | 2026-04-18 01:00:46 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:46.529360 | orchestrator | 2026-04-18 01:00:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:46.530589 | orchestrator | 2026-04-18 01:00:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:46.532334 | orchestrator | 2026-04-18 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:49.565365 | orchestrator | 2026-04-18 01:00:49 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:49.567420 | orchestrator | 2026-04-18 01:00:49 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:49.569420 | orchestrator | 2026-04-18 01:00:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:49.571100 | orchestrator | 2026-04-18 01:00:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:49.571273 | orchestrator | 2026-04-18 01:00:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:52.609343 | orchestrator | 2026-04-18 01:00:52 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:52.610707 | orchestrator | 2026-04-18 01:00:52 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:52.610818 | orchestrator | 2026-04-18 01:00:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:52.612536 | orchestrator | 2026-04-18 01:00:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:52.612593 | orchestrator | 2026-04-18 01:00:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:55.650906 | orchestrator | 2026-04-18 01:00:55 | INFO  | Task 
ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:55.651197 | orchestrator | 2026-04-18 01:00:55 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:55.655428 | orchestrator | 2026-04-18 01:00:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:55.657127 | orchestrator | 2026-04-18 01:00:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:55.657174 | orchestrator | 2026-04-18 01:00:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:00:58.699501 | orchestrator | 2026-04-18 01:00:58 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:00:58.701421 | orchestrator | 2026-04-18 01:00:58 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:00:58.703854 | orchestrator | 2026-04-18 01:00:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:00:58.706952 | orchestrator | 2026-04-18 01:00:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:00:58.707141 | orchestrator | 2026-04-18 01:00:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:01.753408 | orchestrator | 2026-04-18 01:01:01 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:01.756406 | orchestrator | 2026-04-18 01:01:01 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:01.759089 | orchestrator | 2026-04-18 01:01:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:01.762633 | orchestrator | 2026-04-18 01:01:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:01.762811 | orchestrator | 2026-04-18 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:04.801861 | orchestrator | 2026-04-18 01:01:04 | INFO  | Task 
ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:04.804758 | orchestrator | 2026-04-18 01:01:04 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:04.806116 | orchestrator | 2026-04-18 01:01:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:04.809295 | orchestrator | 2026-04-18 01:01:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:04.809337 | orchestrator | 2026-04-18 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:07.839818 | orchestrator | 2026-04-18 01:01:07 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:07.841868 | orchestrator | 2026-04-18 01:01:07 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:07.844510 | orchestrator | 2026-04-18 01:01:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:07.846757 | orchestrator | 2026-04-18 01:01:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:07.846936 | orchestrator | 2026-04-18 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:10.885823 | orchestrator | 2026-04-18 01:01:10 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:10.886165 | orchestrator | 2026-04-18 01:01:10 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:10.886921 | orchestrator | 2026-04-18 01:01:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:10.887749 | orchestrator | 2026-04-18 01:01:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:10.887783 | orchestrator | 2026-04-18 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:13.929796 | orchestrator | 2026-04-18 01:01:13 | INFO  | Task 
ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:13.933532 | orchestrator | 2026-04-18 01:01:13 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:13.934808 | orchestrator | 2026-04-18 01:01:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:13.937079 | orchestrator | 2026-04-18 01:01:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:13.937112 | orchestrator | 2026-04-18 01:01:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:16.995490 | orchestrator | 2026-04-18 01:01:16 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:16.996139 | orchestrator | 2026-04-18 01:01:16 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:16.998130 | orchestrator | 2026-04-18 01:01:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:17.002314 | orchestrator | 2026-04-18 01:01:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:17.002469 | orchestrator | 2026-04-18 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:20.037310 | orchestrator | 2026-04-18 01:01:20 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:20.037982 | orchestrator | 2026-04-18 01:01:20 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:20.038984 | orchestrator | 2026-04-18 01:01:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:20.039993 | orchestrator | 2026-04-18 01:01:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:20.040028 | orchestrator | 2026-04-18 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:23.088813 | orchestrator | 2026-04-18 01:01:23 | INFO  | Task 
ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:23.091414 | orchestrator | 2026-04-18 01:01:23 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:23.093581 | orchestrator | 2026-04-18 01:01:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:23.095539 | orchestrator | 2026-04-18 01:01:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:23.095729 | orchestrator | 2026-04-18 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:26.139556 | orchestrator | 2026-04-18 01:01:26 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:26.141071 | orchestrator | 2026-04-18 01:01:26 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:26.144289 | orchestrator | 2026-04-18 01:01:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:26.146657 | orchestrator | 2026-04-18 01:01:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:26.146975 | orchestrator | 2026-04-18 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:29.199594 | orchestrator | 2026-04-18 01:01:29 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:29.203555 | orchestrator | 2026-04-18 01:01:29 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:29.204410 | orchestrator | 2026-04-18 01:01:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:29.205550 | orchestrator | 2026-04-18 01:01:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:29.205693 | orchestrator | 2026-04-18 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:32.246219 | orchestrator | 2026-04-18 01:01:32 | INFO  | Task 
ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state STARTED 2026-04-18 01:01:32.248624 | orchestrator | 2026-04-18 01:01:32 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:32.250976 | orchestrator | 2026-04-18 01:01:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:32.252699 | orchestrator | 2026-04-18 01:01:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:32.253377 | orchestrator | 2026-04-18 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:35.301010 | orchestrator | 2026-04-18 01:01:35.301109 | orchestrator | 2026-04-18 01:01:35.301121 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-18 01:01:35.301130 | orchestrator | 2026-04-18 01:01:35.301136 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-18 01:01:35.301143 | orchestrator | Saturday 18 April 2026 01:00:28 +0000 (0:00:00.238) 0:00:00.238 ******** 2026-04-18 01:01:35.301150 | orchestrator | ok: [testbed-manager] 2026-04-18 01:01:35.301175 | orchestrator | ok: [testbed-node-0] 2026-04-18 01:01:35.301189 | orchestrator | ok: [testbed-node-1] 2026-04-18 01:01:35.301195 | orchestrator | ok: [testbed-node-2] 2026-04-18 01:01:35.301223 | orchestrator | ok: [testbed-node-3] 2026-04-18 01:01:35.301270 | orchestrator | ok: [testbed-node-4] 2026-04-18 01:01:35.301278 | orchestrator | ok: [testbed-node-5] 2026-04-18 01:01:35.301285 | orchestrator | 2026-04-18 01:01:35.301291 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-18 01:01:35.301298 | orchestrator | Saturday 18 April 2026 01:00:28 +0000 (0:00:00.519) 0:00:00.758 ******** 2026-04-18 01:01:35.301305 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-18 01:01:35.301317 | orchestrator | ok: [testbed-node-0] => 
(item=enable_prometheus_True) 2026-04-18 01:01:35.301331 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-18 01:01:35.301349 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-18 01:01:35.301358 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-18 01:01:35.301366 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-18 01:01:35.301375 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-18 01:01:35.301384 | orchestrator | 2026-04-18 01:01:35.301454 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-18 01:01:35.301465 | orchestrator | 2026-04-18 01:01:35.301546 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-18 01:01:35.301559 | orchestrator | Saturday 18 April 2026 01:00:29 +0000 (0:00:00.693) 0:00:01.451 ******** 2026-04-18 01:01:35.301571 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 01:01:35.301598 | orchestrator | 2026-04-18 01:01:35.301605 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-18 01:01:35.301613 | orchestrator | Saturday 18 April 2026 01:00:30 +0000 (0:00:01.090) 0:00:02.542 ******** 2026-04-18 01:01:35.301626 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-18 01:01:35.301638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.301659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.301701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.301710 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.301719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.301726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 
01:01:35.301734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.301741 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.301763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.301780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.301787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.301794 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:35.301802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.301808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.301815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301849 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.301856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.301894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.301929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.301950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.301960 | orchestrator | 2026-04-18 01:01:35.301971 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-18 01:01:35.301980 | orchestrator | Saturday 18 April 2026 01:00:33 +0000 (0:00:03.019) 0:00:05.562 ******** 2026-04-18 01:01:35.301989 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-18 01:01:35.301999 | orchestrator | 2026-04-18 01:01:35.302009 | 
orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-18 01:01:35.302095 | orchestrator | Saturday 18 April 2026 01:00:35 +0000 (0:00:01.384) 0:00:06.946 ******** 2026-04-18 01:01:35.302107 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-18 01:01:35.302144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.302157 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.302167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.302174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.302181 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.302187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.302202 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.302213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.302351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.302370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.302382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.302394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.302405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.302425 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.302432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-18 01:01:35.302452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.302459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.302466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.302472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.302479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.302486 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:35.302499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35 | INFO  | Task ff008d92-31ce-4f0a-a4bf-b51623c9494d is in state SUCCESS 2026-04-18 01:01:35.302516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.302533 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.302541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor',
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.302551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.302573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.302587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.302597 | orchestrator | 2026-04-18 01:01:35.302607 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-18 01:01:35.302617 | orchestrator | Saturday 18 April 2026 01:00:40 +0000 (0:00:05.722) 0:00:12.669 ******** 2026-04-18 01:01:35.302642 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-18 01:01:35.302655 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.302665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.302676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.302695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.302706 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.302716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.302727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.302740 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:35.302748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.302759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.302766 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.302772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.302779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.302796 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.302807 | orchestrator | skipping: [testbed-manager] 2026-04-18 01:01:35.302819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.302883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.302904 | orchestrator | skipping: [testbed-node-0] 2026-04-18 01:01:35.302914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.302957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.302968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.302979 | orchestrator | skipping: [testbed-node-3] 2026-04-18 01:01:35.302989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303013 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303055 | orchestrator | skipping: [testbed-node-4] 2026-04-18 01:01:35.303094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303106 | orchestrator | skipping: [testbed-node-2] 2026-04-18 01:01:35.303113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303120 | orchestrator | skipping: [testbed-node-1] 2026-04-18 01:01:35.303126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.303133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303151 | orchestrator | skipping: [testbed-node-5] 2026-04-18 01:01:35.303157 | orchestrator | 2026-04-18 01:01:35.303163 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-18 01:01:35.303175 | orchestrator | Saturday 18 April 2026 01:00:42 +0000 (0:00:01.773) 0:00:14.443 ******** 2026-04-18 01:01:35.303183 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-18 01:01:35.303200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.303207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.303213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303226 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.303289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303323 | orchestrator | skipping: [testbed-node-1] 2026-04-18 01:01:35.303329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.303336 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-18 01:01:35.303367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303704 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:35.303732 | orchestrator | skipping: [testbed-node-0] 2026-04-18 01:01:35.303740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.303761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.303768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303782 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303790 | orchestrator | skipping: [testbed-manager] 2026-04-18 01:01:35.303802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.303809 | orchestrator | 
skipping: [testbed-node-2] 2026-04-18 01:01:35.303816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303828 | orchestrator | skipping: [testbed-node-3] 2026-04-18 01:01:35.303835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.303841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303848 | orchestrator | skipping: [testbed-node-4] 2026-04-18 01:01:35.303859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.303877 | orchestrator | skipping: [testbed-node-5] 2026-04-18 01:01:35.303883 | orchestrator | 2026-04-18 01:01:35.303889 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-18 01:01:35.303896 | orchestrator | Saturday 18 April 2026 01:00:44 +0000 (0:00:02.180) 0:00:16.623 ******** 2026-04-18 
01:01:35.303906 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-18 01:01:35.303913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.303920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.303926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.303941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.303971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.303979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.303989 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-18 01:01:35.303996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304049 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304129 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:35.304141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-18 01:01:35.304220 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-18 01:01:35.304356 | orchestrator | 2026-04-18 01:01:35.304364 | orchestrator | TASK 
[prometheus : Find custom prometheus alert rules files] ******************* 2026-04-18 01:01:35.304376 | orchestrator | Saturday 18 April 2026 01:00:49 +0000 (0:00:05.025) 0:00:21.648 ******** 2026-04-18 01:01:35.304384 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-18 01:01:35.304392 | orchestrator | 2026-04-18 01:01:35.304399 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-18 01:01:35.304406 | orchestrator | Saturday 18 April 2026 01:00:50 +0000 (0:00:00.889) 0:00:22.538 ******** 2026-04-18 01:01:35.304413 | orchestrator | skipping: [testbed-manager] 2026-04-18 01:01:35.304421 | orchestrator | skipping: [testbed-node-0] 2026-04-18 01:01:35.304428 | orchestrator | skipping: [testbed-node-1] 2026-04-18 01:01:35.304435 | orchestrator | skipping: [testbed-node-2] 2026-04-18 01:01:35.304443 | orchestrator | skipping: [testbed-node-3] 2026-04-18 01:01:35.304450 | orchestrator | skipping: [testbed-node-4] 2026-04-18 01:01:35.304458 | orchestrator | skipping: [testbed-node-5] 2026-04-18 01:01:35.304465 | orchestrator | 2026-04-18 01:01:35.304471 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-18 01:01:35.304477 | orchestrator | Saturday 18 April 2026 01:00:51 +0000 (0:00:00.763) 0:00:23.301 ******** 2026-04-18 01:01:35.304484 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-18 01:01:35.304490 | orchestrator | 2026-04-18 01:01:35.304496 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-18 01:01:35.304508 | orchestrator | Saturday 18 April 2026 01:00:52 +0000 (0:00:00.753) 0:00:24.055 ******** 2026-04-18 01:01:35.304514 | orchestrator | [WARNING]: Skipped 2026-04-18 01:01:35.304521 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304528 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-18 
01:01:35.304534 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304541 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-18 01:01:35.304547 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-18 01:01:35.304553 | orchestrator | [WARNING]: Skipped 2026-04-18 01:01:35.304560 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304566 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-18 01:01:35.304572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304578 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-18 01:01:35.304584 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-18 01:01:35.304591 | orchestrator | [WARNING]: Skipped 2026-04-18 01:01:35.304597 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304603 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-18 01:01:35.304609 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304615 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-18 01:01:35.304622 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-18 01:01:35.304628 | orchestrator | [WARNING]: Skipped 2026-04-18 01:01:35.304634 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304640 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-18 01:01:35.304646 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304652 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-18 01:01:35.304659 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-18 01:01:35.304665 | 
orchestrator | [WARNING]: Skipped 2026-04-18 01:01:35.304674 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304681 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-18 01:01:35.304687 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304693 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-18 01:01:35.304699 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-18 01:01:35.304705 | orchestrator | [WARNING]: Skipped 2026-04-18 01:01:35.304711 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304718 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-18 01:01:35.304724 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304730 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-18 01:01:35.304736 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-18 01:01:35.304742 | orchestrator | [WARNING]: Skipped 2026-04-18 01:01:35.304748 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304754 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-18 01:01:35.304761 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-18 01:01:35.304767 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-18 01:01:35.304773 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-18 01:01:35.304779 | orchestrator | 2026-04-18 01:01:35.304785 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-18 01:01:35.304796 | orchestrator | Saturday 18 April 2026 01:00:53 +0000 (0:00:01.655) 0:00:25.711 ******** 2026-04-18 01:01:35.304802 | orchestrator | 
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
skipping: [testbed-node-5]
changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)

TASK [prometheus : Copying over prometheus web config file] ********************
Saturday 18 April 2026 01:01:05 +0000 (0:00:11.351) 0:00:37.062 ********
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-2]
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-1]
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
skipping: [testbed-node-5]
changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)

TASK [prometheus : Copying over prometheus alertmanager config file] ***********
Saturday 18 April 2026 01:01:08 +0000 (0:00:02.861) 0:00:39.924 ********
skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
skipping: [testbed-node-5]
changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)

TASK [prometheus : Find custom Alertmanager alert notification templates] ******
Saturday 18 April 2026 01:01:09 +0000 (0:00:01.339) 0:00:41.264 ********
ok: [testbed-manager -> localhost]

TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
Saturday 18 April 2026 01:01:10 +0000 (0:00:00.682) 0:00:41.947 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
Saturday 18 April 2026 01:01:10 +0000 (0:00:00.697) 0:00:42.644 ********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [prometheus : Copying cloud config file for openstack exporter] ***********
Saturday 18 April 2026 01:01:12 +0000 (0:00:01.963) 0:00:44.607 ********
skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
skipping: [testbed-node-5]

TASK [prometheus : Copying config file for blackbox exporter] ******************
Saturday 18 April 2026 01:01:13 +0000 (0:00:01.146) 0:00:45.754 ********
skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-4]
changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
skipping: [testbed-node-5]

TASK [prometheus : Find extra prometheus server config files] ******************
Saturday 18 April 2026 01:01:15 +0000 (0:00:01.737) 0:00:47.491 ********
[WARNING]: Skipped
'/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
due to this access issue:
'/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
not a directory
ok: [testbed-manager -> localhost]

TASK [prometheus : Create subdirectories for extra config files] ***************
Saturday 18 April 2026 01:01:16 +0000 (0:00:01.116) 0:00:48.608 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [prometheus : Template extra prometheus server config files] **************
Saturday 18 April 2026 01:01:17 +0000 (0:00:00.643) 0:00:49.251 ********
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [service-check-containers : prometheus | Check containers] ****************
Saturday 18 April 2026 01:01:18 +0000 (0:00:00.793) 0:00:50.045 ********
changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
Saturday 18 April 2026 01:01:22 +0000 (0:00:04.056) 0:00:54.102 ********
changed: [testbed-manager] => {
    "msg": "Notifying handlers"
}
changed: [testbed-node-0] => {
    "msg": "Notifying handlers"
}
changed: [testbed-node-1] => {
    "msg": "Notifying handlers"
}
changed: [testbed-node-2] => {
    "msg": "Notifying handlers"
}
changed: [testbed-node-3] => {
    "msg": "Notifying handlers"
}
changed: [testbed-node-4] => {
    "msg": "Notifying handlers"
}
changed: [testbed-node-5] => {
    "msg": "Notifying handlers"
}

TASK [service-check-containers : Include tasks] ********************************
Saturday 18 April 2026 01:01:23 +0000 (0:00:00.850) 0:00:54.952 ********
skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
skipping: [testbed-manager]
skipping:
[testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.306971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.306980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.306990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.307005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.307018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.307028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.307036 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.307044 | orchestrator | skipping: [testbed-node-0] 2026-04-18 01:01:35.307060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.307070 | orchestrator | skipping: [testbed-node-1] 2026-04-18 01:01:35.307079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-18 01:01:35.307088 | orchestrator | skipping: [testbed-node-2] 2026-04-18 01:01:35.307097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.307113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.307127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.307136 | orchestrator | skipping: [testbed-node-3] 2026-04-18 01:01:35.307145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.307155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.307170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.307180 | orchestrator | skipping: [testbed-node-4] 2026-04-18 01:01:35.307189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-18 01:01:35.307195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.307206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-18 01:01:35.307211 | orchestrator | skipping: [testbed-node-5] 2026-04-18 01:01:35.307217 | orchestrator | 2026-04-18 01:01:35.307223 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-18 01:01:35.307228 | orchestrator | Saturday 18 April 2026 01:01:24 +0000 (0:00:01.839) 0:00:56.792 ******** 2026-04-18 01:01:35.307286 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-18 01:01:35.307296 | orchestrator | skipping: [testbed-manager] 2026-04-18 01:01:35.307306 | orchestrator | 2026-04-18 01:01:35.307318 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2026-04-18 01:01:35.307334 | orchestrator | Saturday 18 April 2026 01:01:26 +0000 (0:00:01.098) 0:00:57.890 ******** 2026-04-18 01:01:35.307344 | orchestrator | 2026-04-18 01:01:35.307353 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-18 01:01:35.307368 | orchestrator | Saturday 18 April 2026 01:01:26 +0000 (0:00:00.065) 0:00:57.956 ******** 2026-04-18 01:01:35.307378 | orchestrator | 2026-04-18 01:01:35.307387 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-18 01:01:35.307396 | orchestrator | Saturday 18 April 2026 01:01:26 +0000 (0:00:00.221) 0:00:58.178 ******** 2026-04-18 01:01:35.307405 | orchestrator | 2026-04-18 01:01:35.307413 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-18 01:01:35.307421 | orchestrator | Saturday 18 April 2026 01:01:26 +0000 (0:00:00.061) 0:00:58.239 ******** 2026-04-18 01:01:35.307431 | orchestrator | 2026-04-18 01:01:35.307441 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-18 01:01:35.307450 | orchestrator | Saturday 18 April 2026 01:01:26 +0000 (0:00:00.057) 0:00:58.296 ******** 2026-04-18 01:01:35.307460 | orchestrator | 2026-04-18 01:01:35.307471 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-18 01:01:35.307481 | orchestrator | Saturday 18 April 2026 01:01:26 +0000 (0:00:00.060) 0:00:58.357 ******** 2026-04-18 01:01:35.307492 | orchestrator | 2026-04-18 01:01:35.307502 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-18 01:01:35.307512 | orchestrator | Saturday 18 April 2026 01:01:26 +0000 (0:00:00.060) 0:00:58.418 ******** 2026-04-18 01:01:35.307519 | orchestrator | 2026-04-18 01:01:35.307524 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-server container] ************* 2026-04-18 01:01:35.307530 | orchestrator | Saturday 18 April 2026 01:01:26 +0000 (0:00:00.083) 0:00:58.502 ******** 2026-04-18 01:01:35.307544 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-server\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_sq9p3fhx/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_sq9p3fhx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_sq9p3fhx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_sq9p3fhx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File 
\"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=3.2.1.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-server: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-server not found\")\\n'"} 2026-04-18 01:01:35.307560 | orchestrator | 2026-04-18 01:01:35.307566 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-18 01:01:35.307571 | orchestrator | Saturday 18 April 2026 01:01:29 +0000 (0:00:02.386) 0:01:00.888 ******** 2026-04-18 01:01:35.307586 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_tcny3vxq/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_tcny3vxq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_tcny3vxq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_tcny3vxq/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-18 01:01:35.307602 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_6s1qk_ad/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_6s1qk_ad/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_6s1qk_ad/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_6s1qk_ad/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-18 01:01:35.307614 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_a6c3hi2p/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_a6c3hi2p/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_a6c3hi2p/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_a6c3hi2p/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-18 01:01:35.307629 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_viz_y1up/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_viz_y1up/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_viz_y1up/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_viz_y1up/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in 
create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-18 01:01:35.307645 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_nvmncys2/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_nvmncys2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_nvmncys2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_nvmncys2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n 
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-18 01:01:35.307663 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_mvx8yejn/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/tmp/ansible_kolla_container_payload_mvx8yejn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 352, in recreate_or_restart_container\\n self.start_container()\\n File \"/tmp/ansible_kolla_container_payload_mvx8yejn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 370, in start_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_mvx8yejn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=1.8.2.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2F2024.2%2Fprometheus-node-exporter: Internal Server Error (\"unknown: repository kolla/release/2024.2/prometheus-node-exporter not found\")\\n'"} 2026-04-18 01:01:35.307674 | orchestrator | 2026-04-18 01:01:35.307680 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 01:01:35.307687 | orchestrator | testbed-manager : ok=18  changed=9  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2026-04-18 01:01:35.307693 | orchestrator | testbed-node-0 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-18 01:01:35.307699 | orchestrator | testbed-node-1 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  
rescued=0 ignored=0 2026-04-18 01:01:35.307705 | orchestrator | testbed-node-2 : ok=11  changed=6  unreachable=0 failed=1  skipped=12  rescued=0 ignored=0 2026-04-18 01:01:35.307710 | orchestrator | testbed-node-3 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-18 01:01:35.307715 | orchestrator | testbed-node-4 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-18 01:01:35.307721 | orchestrator | testbed-node-5 : ok=10  changed=5  unreachable=0 failed=1  skipped=13  rescued=0 ignored=0 2026-04-18 01:01:35.307726 | orchestrator | 2026-04-18 01:01:35.307731 | orchestrator | 2026-04-18 01:01:35.307737 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 01:01:35.307743 | orchestrator | Saturday 18 April 2026 01:01:33 +0000 (0:00:04.364) 0:01:05.253 ******** 2026-04-18 01:01:35.307751 | orchestrator | =============================================================================== 2026-04-18 01:01:35.307757 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 11.35s 2026-04-18 01:01:35.307763 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.72s 2026-04-18 01:01:35.307768 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.03s 2026-04-18 01:01:35.307773 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 4.37s 2026-04-18 01:01:35.307779 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.06s 2026-04-18 01:01:35.307784 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.02s 2026-04-18 01:01:35.307789 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.86s 2026-04-18 01:01:35.307795 | orchestrator | prometheus : Restart prometheus-server container ------------------------ 
2.39s 2026-04-18 01:01:35.307800 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.18s 2026-04-18 01:01:35.307806 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.96s 2026-04-18 01:01:35.307810 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.84s 2026-04-18 01:01:35.307821 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.77s 2026-04-18 01:01:35.307826 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.74s 2026-04-18 01:01:35.307830 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.66s 2026-04-18 01:01:35.307835 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.38s 2026-04-18 01:01:35.307840 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.34s 2026-04-18 01:01:35.307845 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.15s 2026-04-18 01:01:35.307850 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 1.12s 2026-04-18 01:01:35.307854 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 1.10s 2026-04-18 01:01:35.307859 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.09s 2026-04-18 01:01:35.307867 | orchestrator | 2026-04-18 01:01:35 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:35.307872 | orchestrator | 2026-04-18 01:01:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:35.307877 | orchestrator | 2026-04-18 01:01:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:35.307882 | orchestrator | 2026-04-18 01:01:35 | INFO  | Task 
3171ecc6-19dd-4d87-ac0c-74fdbd975ce0 is in state STARTED 2026-04-18 01:01:35.307886 | orchestrator | 2026-04-18 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:38.358786 | orchestrator | 2026-04-18 01:01:38 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:38.361826 | orchestrator | 2026-04-18 01:01:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:38.363185 | orchestrator | 2026-04-18 01:01:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:38.364784 | orchestrator | 2026-04-18 01:01:38 | INFO  | Task 3171ecc6-19dd-4d87-ac0c-74fdbd975ce0 is in state STARTED 2026-04-18 01:01:38.364830 | orchestrator | 2026-04-18 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:41.404696 | orchestrator | 2026-04-18 01:01:41 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:41.404785 | orchestrator | 2026-04-18 01:01:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:41.405040 | orchestrator | 2026-04-18 01:01:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:41.406173 | orchestrator | 2026-04-18 01:01:41 | INFO  | Task 3171ecc6-19dd-4d87-ac0c-74fdbd975ce0 is in state STARTED 2026-04-18 01:01:41.406207 | orchestrator | 2026-04-18 01:01:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:44.439536 | orchestrator | 2026-04-18 01:01:44 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:44.441432 | orchestrator | 2026-04-18 01:01:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:44.443419 | orchestrator | 2026-04-18 01:01:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:44.445299 | orchestrator | 2026-04-18 01:01:44 | INFO  | Task 
3171ecc6-19dd-4d87-ac0c-74fdbd975ce0 is in state STARTED 2026-04-18 01:01:44.445449 | orchestrator | 2026-04-18 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:47.481651 | orchestrator | 2026-04-18 01:01:47 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:47.483874 | orchestrator | 2026-04-18 01:01:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:47.486267 | orchestrator | 2026-04-18 01:01:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:47.488291 | orchestrator | 2026-04-18 01:01:47 | INFO  | Task 3171ecc6-19dd-4d87-ac0c-74fdbd975ce0 is in state STARTED 2026-04-18 01:01:47.488358 | orchestrator | 2026-04-18 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:50.530178 | orchestrator | 2026-04-18 01:01:50 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:50.530455 | orchestrator | 2026-04-18 01:01:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:50.532059 | orchestrator | 2026-04-18 01:01:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:50.533742 | orchestrator | 2026-04-18 01:01:50 | INFO  | Task 3171ecc6-19dd-4d87-ac0c-74fdbd975ce0 is in state STARTED 2026-04-18 01:01:50.533789 | orchestrator | 2026-04-18 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:53.579480 | orchestrator | 2026-04-18 01:01:53 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:53.581028 | orchestrator | 2026-04-18 01:01:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:53.582294 | orchestrator | 2026-04-18 01:01:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:53.583963 | orchestrator | 2026-04-18 01:01:53 | INFO  | Task 
3171ecc6-19dd-4d87-ac0c-74fdbd975ce0 is in state STARTED 2026-04-18 01:01:53.584011 | orchestrator | 2026-04-18 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:56.631321 | orchestrator | 2026-04-18 01:01:56 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:56.633208 | orchestrator | 2026-04-18 01:01:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:56.635046 | orchestrator | 2026-04-18 01:01:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:56.636915 | orchestrator | 2026-04-18 01:01:56 | INFO  | Task 3171ecc6-19dd-4d87-ac0c-74fdbd975ce0 is in state SUCCESS 2026-04-18 01:01:56.638425 | orchestrator | 2026-04-18 01:01:56.638468 | orchestrator | 2026-04-18 01:01:56.638478 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-18 01:01:56.638488 | orchestrator | 2026-04-18 01:01:56.638498 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-18 01:01:56.638507 | orchestrator | Saturday 18 April 2026 01:01:36 +0000 (0:00:00.262) 0:00:00.262 ******** 2026-04-18 01:01:56.638516 | orchestrator | ok: [testbed-node-0] 2026-04-18 01:01:56.638525 | orchestrator | ok: [testbed-node-1] 2026-04-18 01:01:56.638534 | orchestrator | ok: [testbed-node-2] 2026-04-18 01:01:56.638542 | orchestrator | 2026-04-18 01:01:56.638551 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-18 01:01:56.638559 | orchestrator | Saturday 18 April 2026 01:01:36 +0000 (0:00:00.228) 0:00:00.491 ******** 2026-04-18 01:01:56.638567 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-18 01:01:56.638576 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-18 01:01:56.638584 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-18 
01:01:56.638593 | orchestrator | 2026-04-18 01:01:56.638601 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-18 01:01:56.638609 | orchestrator | 2026-04-18 01:01:56.638618 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-18 01:01:56.638626 | orchestrator | Saturday 18 April 2026 01:01:37 +0000 (0:00:00.248) 0:00:00.739 ******** 2026-04-18 01:01:56.638653 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 01:01:56.638663 | orchestrator | 2026-04-18 01:01:56.638671 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-18 01:01:56.638680 | orchestrator | Saturday 18 April 2026 01:01:37 +0000 (0:00:00.516) 0:00:01.256 ******** 2026-04-18 01:01:56.638709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.638732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.638741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.638749 | orchestrator | 2026-04-18 01:01:56.638758 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-18 01:01:56.638767 | orchestrator | Saturday 18 April 2026 01:01:38 +0000 (0:00:01.016) 0:00:02.272 ******** 2026-04-18 01:01:56.638775 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-18 01:01:56.638784 | orchestrator | 2026-04-18 01:01:56.638793 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-18 01:01:56.638802 | orchestrator | Saturday 18 April 2026 01:01:39 +0000 (0:00:00.768) 0:00:03.040 
******** 2026-04-18 01:01:56.638811 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-18 01:01:56.638820 | orchestrator | 2026-04-18 01:01:56.638839 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-18 01:01:56.638848 | orchestrator | Saturday 18 April 2026 01:01:39 +0000 (0:00:00.468) 0:00:03.509 ******** 2026-04-18 01:01:56.638857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.638873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.638887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.638895 | orchestrator | 2026-04-18 01:01:56.638905 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-18 01:01:56.638913 | orchestrator | Saturday 18 April 2026 01:01:41 +0000 (0:00:01.345) 0:00:04.854 ******** 2026-04-18 01:01:56.638922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 
01:01:56.638932 | orchestrator | skipping: [testbed-node-0] 2026-04-18 01:01:56.638947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:56.638957 | orchestrator | skipping: [testbed-node-1] 2026-04-18 01:01:56.638973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:56.638984 | orchestrator | skipping: [testbed-node-2] 2026-04-18 01:01:56.638994 | orchestrator | 2026-04-18 01:01:56.639004 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 
2026-04-18 01:01:56.639013 | orchestrator | Saturday 18 April 2026 01:01:41 +0000 (0:00:00.366) 0:00:05.221 ******** 2026-04-18 01:01:56.639021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:56.639031 | orchestrator | skipping: [testbed-node-0] 2026-04-18 01:01:56.639044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:56.639053 | orchestrator | skipping: [testbed-node-1] 2026-04-18 01:01:56.639062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:56.639071 | orchestrator | skipping: [testbed-node-2] 2026-04-18 01:01:56.639080 | orchestrator | 2026-04-18 01:01:56.639089 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-18 01:01:56.639098 | orchestrator | Saturday 18 April 2026 01:01:42 +0000 (0:00:00.581) 0:00:05.802 ******** 2026-04-18 01:01:56.639114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.639128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.639137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.639147 | orchestrator | 2026-04-18 01:01:56.639161 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-18 01:01:56.639169 | orchestrator | Saturday 18 April 2026 01:01:43 +0000 (0:00:01.339) 0:00:07.142 ******** 2026-04-18 01:01:56.639178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.639186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.639206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.639215 | orchestrator | 2026-04-18 01:01:56.639224 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-18 01:01:56.639296 | orchestrator | Saturday 18 April 2026 01:01:45 +0000 (0:00:01.624) 0:00:08.767 ******** 2026-04-18 01:01:56.639305 | orchestrator | skipping: [testbed-node-0] 2026-04-18 01:01:56.639314 | orchestrator | skipping: [testbed-node-1] 2026-04-18 01:01:56.639323 | orchestrator | skipping: [testbed-node-2] 2026-04-18 01:01:56.639332 | orchestrator | 2026-04-18 01:01:56.639337 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-18 01:01:56.639343 | orchestrator | Saturday 18 April 2026 01:01:45 +0000 (0:00:00.227) 0:00:08.994 ******** 2026-04-18 01:01:56.639348 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-18 01:01:56.639354 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-18 01:01:56.639359 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-18 01:01:56.639364 | orchestrator | 2026-04-18 01:01:56.639369 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-18 01:01:56.639375 | orchestrator | Saturday 18 April 2026 01:01:46 +0000 (0:00:01.275) 0:00:10.270 ******** 2026-04-18 01:01:56.639380 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-18 01:01:56.639385 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-18 01:01:56.639390 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-18 01:01:56.639395 | orchestrator | 2026-04-18 01:01:56.639401 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-04-18 01:01:56.639406 | orchestrator | Saturday 18 April 2026 01:01:47 +0000 (0:00:01.320) 0:00:11.590 ******** 2026-04-18 01:01:56.639411 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-18 01:01:56.639416 | orchestrator | 2026-04-18 01:01:56.639421 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-04-18 01:01:56.639426 | orchestrator | Saturday 18 April 2026 01:01:48 +0000 (0:00:00.668) 0:00:12.259 ******** 2026-04-18 01:01:56.639437 | orchestrator | ok: [testbed-node-0] 2026-04-18 01:01:56.639457 | orchestrator | ok: [testbed-node-1] 2026-04-18 01:01:56.639467 | orchestrator | ok: [testbed-node-2] 2026-04-18 01:01:56.639476 | orchestrator | 2026-04-18 01:01:56.639484 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-18 01:01:56.639492 | orchestrator | Saturday 18 April 2026 01:01:49 +0000 (0:00:00.870) 0:00:13.129 ******** 2026-04-18 01:01:56.639501 | orchestrator | changed: [testbed-node-0] 2026-04-18 01:01:56.639510 | orchestrator | changed: [testbed-node-1] 2026-04-18 01:01:56.639517 | orchestrator | changed: [testbed-node-2] 2026-04-18 01:01:56.639525 | orchestrator | 2026-04-18 01:01:56.639533 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-04-18 01:01:56.639542 | orchestrator | Saturday 18 April 2026 01:01:50 +0000 (0:00:01.282) 0:00:14.411 ******** 2026-04-18 01:01:56.639560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.639569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.639587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-18 01:01:56.639596 | orchestrator | 2026-04-18 01:01:56.639604 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-04-18 01:01:56.639612 | orchestrator | Saturday 18 April 2026 01:01:51 +0000 (0:00:01.014) 0:00:15.426 ******** 2026-04-18 01:01:56.639617 | orchestrator | changed: [testbed-node-0] => { 2026-04-18 01:01:56.639622 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 01:01:56.639631 | orchestrator | } 2026-04-18 01:01:56.639641 | orchestrator | changed: [testbed-node-1] => { 2026-04-18 01:01:56.639652 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 01:01:56.639660 | orchestrator | } 2026-04-18 01:01:56.639667 | orchestrator | changed: [testbed-node-2] => { 2026-04-18 01:01:56.639675 | orchestrator |  "msg": "Notifying handlers" 2026-04-18 01:01:56.639683 | orchestrator | } 2026-04-18 01:01:56.639690 | orchestrator | 2026-04-18 01:01:56.639698 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-18 01:01:56.639707 | orchestrator | Saturday 18 April 2026 01:01:52 +0000 (0:00:00.291) 0:00:15.718 ******** 2026-04-18 01:01:56.639721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:56.639737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:56.639746 | orchestrator | skipping: [testbed-node-0] 2026-04-18 01:01:56.639754 | orchestrator | skipping: [testbed-node-1] 2026-04-18 01:01:56.639762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2024.2/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-18 01:01:56.639771 | orchestrator | skipping: [testbed-node-2] 2026-04-18 01:01:56.639780 | orchestrator | 2026-04-18 01:01:56.639787 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-18 01:01:56.639796 | orchestrator | Saturday 18 April 2026 01:01:52 +0000 (0:00:00.712) 0:00:16.430 ******** 2026-04-18 01:01:56.639811 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "kolla_toolbox container is missing or not running!"} 2026-04-18 01:01:56.639819 | orchestrator | 2026-04-18 01:01:56.639828 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-18 01:01:56.639837 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2026-04-18 01:01:56.639847 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-18 01:01:56.639856 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-18 01:01:56.639865 | orchestrator | 2026-04-18 01:01:56.639873 | orchestrator | 2026-04-18 01:01:56.639881 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-18 01:01:56.639890 | orchestrator | Saturday 18 April 2026 01:01:53 +0000 (0:00:00.878) 0:00:17.309 ******** 2026-04-18 01:01:56.639898 | orchestrator | =============================================================================== 2026-04-18 01:01:56.639905 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.62s 2026-04-18 01:01:56.639911 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.35s 2026-04-18 01:01:56.639915 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.34s 2026-04-18 01:01:56.639920 | 
orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.32s 2026-04-18 01:01:56.639925 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.28s 2026-04-18 01:01:56.639935 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.28s 2026-04-18 01:01:56.639940 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.02s 2026-04-18 01:01:56.639945 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.01s 2026-04-18 01:01:56.639950 | orchestrator | grafana : Creating grafana database ------------------------------------- 0.88s 2026-04-18 01:01:56.639954 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.87s 2026-04-18 01:01:56.639959 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.77s 2026-04-18 01:01:56.639964 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.71s 2026-04-18 01:01:56.639969 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.67s 2026-04-18 01:01:56.639974 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.58s 2026-04-18 01:01:56.639986 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.52s 2026-04-18 01:01:56.639991 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.47s 2026-04-18 01:01:56.639996 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.37s 2026-04-18 01:01:56.640001 | orchestrator | service-check-containers : grafana | Notify handlers to restart containers --- 0.29s 2026-04-18 01:01:56.640006 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.25s 2026-04-18 01:01:56.640011 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 0.23s 2026-04-18 01:01:56.640016 | orchestrator | 2026-04-18 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:01:59.677842 | orchestrator | 2026-04-18 01:01:59 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:01:59.680062 | orchestrator | 2026-04-18 01:01:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:01:59.682122 | orchestrator | 2026-04-18 01:01:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:01:59.682168 | orchestrator | 2026-04-18 01:01:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:02.715853 | orchestrator | 2026-04-18 01:02:02 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:02:02.716577 | orchestrator | 2026-04-18 01:02:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:02.717406 | orchestrator | 2026-04-18 01:02:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:02.717450 | orchestrator | 2026-04-18 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:05.756983 | orchestrator | 2026-04-18 01:02:05 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:02:05.759179 | orchestrator | 2026-04-18 01:02:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:05.760631 | orchestrator | 2026-04-18 01:02:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:05.760760 | orchestrator | 2026-04-18 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:08.802848 | orchestrator | 2026-04-18 01:02:08 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:02:08.804134 | orchestrator | 2026-04-18 01:02:08 | INFO  | Task 
9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:08.807832 | orchestrator | 2026-04-18 01:02:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:08.807936 | orchestrator | 2026-04-18 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:11.852176 | orchestrator | 2026-04-18 01:02:11 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:02:11.854175 | orchestrator | 2026-04-18 01:02:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:11.855801 | orchestrator | 2026-04-18 01:02:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:11.855893 | orchestrator | 2026-04-18 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:14.893778 | orchestrator | 2026-04-18 01:02:14 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state STARTED 2026-04-18 01:02:14.894766 | orchestrator | 2026-04-18 01:02:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:14.895901 | orchestrator | 2026-04-18 01:02:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:14.896011 | orchestrator | 2026-04-18 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:17.930423 | orchestrator | 2026-04-18 01:02:17 | INFO  | Task d5b39927-0cb4-45c1-9e2a-9c96461f4158 is in state SUCCESS 2026-04-18 01:02:17.931686 | orchestrator | 2026-04-18 01:02:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:17.933280 | orchestrator | 2026-04-18 01:02:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:17.933339 | orchestrator | 2026-04-18 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:20.970070 | orchestrator | 2026-04-18 01:02:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:02:20.970311 | orchestrator | 2026-04-18 01:02:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:20.970328 | orchestrator | 2026-04-18 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:24.009041 | orchestrator | 2026-04-18 01:02:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:24.010751 | orchestrator | 2026-04-18 01:02:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:24.010865 | orchestrator | 2026-04-18 01:02:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:27.044757 | orchestrator | 2026-04-18 01:02:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:27.045152 | orchestrator | 2026-04-18 01:02:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:27.045181 | orchestrator | 2026-04-18 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:30.091068 | orchestrator | 2026-04-18 01:02:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:30.093554 | orchestrator | 2026-04-18 01:02:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:30.093617 | orchestrator | 2026-04-18 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:33.131497 | orchestrator | 2026-04-18 01:02:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:33.133811 | orchestrator | 2026-04-18 01:02:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:33.133940 | orchestrator | 2026-04-18 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:36.176763 | orchestrator | 2026-04-18 01:02:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:36.177946 | orchestrator | 2026-04-18 01:02:36 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:36.178051 | orchestrator | 2026-04-18 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:39.221100 | orchestrator | 2026-04-18 01:02:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:39.222794 | orchestrator | 2026-04-18 01:02:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:39.222840 | orchestrator | 2026-04-18 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:42.260696 | orchestrator | 2026-04-18 01:02:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:42.262833 | orchestrator | 2026-04-18 01:02:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:42.262905 | orchestrator | 2026-04-18 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:45.302842 | orchestrator | 2026-04-18 01:02:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:45.304086 | orchestrator | 2026-04-18 01:02:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:45.304130 | orchestrator | 2026-04-18 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:48.347498 | orchestrator | 2026-04-18 01:02:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:48.349706 | orchestrator | 2026-04-18 01:02:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:48.349737 | orchestrator | 2026-04-18 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:51.396214 | orchestrator | 2026-04-18 01:02:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:51.397941 | orchestrator | 2026-04-18 01:02:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:02:51.397976 | orchestrator | 2026-04-18 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:54.442374 | orchestrator | 2026-04-18 01:02:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:54.444681 | orchestrator | 2026-04-18 01:02:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:54.444769 | orchestrator | 2026-04-18 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:02:57.491682 | orchestrator | 2026-04-18 01:02:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:02:57.494416 | orchestrator | 2026-04-18 01:02:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:02:57.494497 | orchestrator | 2026-04-18 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:00.533302 | orchestrator | 2026-04-18 01:03:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:00.534434 | orchestrator | 2026-04-18 01:03:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:00.534463 | orchestrator | 2026-04-18 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:03.572403 | orchestrator | 2026-04-18 01:03:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:03.574547 | orchestrator | 2026-04-18 01:03:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:03.574618 | orchestrator | 2026-04-18 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:06.615968 | orchestrator | 2026-04-18 01:03:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:06.617510 | orchestrator | 2026-04-18 01:03:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:06.617565 | orchestrator | 2026-04-18 01:03:06 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:03:09.656500 | orchestrator | 2026-04-18 01:03:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:09.657502 | orchestrator | 2026-04-18 01:03:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:09.657548 | orchestrator | 2026-04-18 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:12.704488 | orchestrator | 2026-04-18 01:03:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:12.705838 | orchestrator | 2026-04-18 01:03:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:12.705854 | orchestrator | 2026-04-18 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:15.754979 | orchestrator | 2026-04-18 01:03:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:15.756611 | orchestrator | 2026-04-18 01:03:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:15.756661 | orchestrator | 2026-04-18 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:18.797783 | orchestrator | 2026-04-18 01:03:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:18.798628 | orchestrator | 2026-04-18 01:03:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:18.798677 | orchestrator | 2026-04-18 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:21.844695 | orchestrator | 2026-04-18 01:03:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:21.846329 | orchestrator | 2026-04-18 01:03:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:21.846411 | orchestrator | 2026-04-18 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:24.890348 | orchestrator | 2026-04-18 
01:03:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:24.892621 | orchestrator | 2026-04-18 01:03:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:24.892671 | orchestrator | 2026-04-18 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:27.936085 | orchestrator | 2026-04-18 01:03:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:27.938150 | orchestrator | 2026-04-18 01:03:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:27.938440 | orchestrator | 2026-04-18 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:30.974065 | orchestrator | 2026-04-18 01:03:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:30.975546 | orchestrator | 2026-04-18 01:03:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:30.975585 | orchestrator | 2026-04-18 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:34.022104 | orchestrator | 2026-04-18 01:03:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:34.024725 | orchestrator | 2026-04-18 01:03:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:34.024806 | orchestrator | 2026-04-18 01:03:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:37.072528 | orchestrator | 2026-04-18 01:03:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:37.076522 | orchestrator | 2026-04-18 01:03:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:37.076611 | orchestrator | 2026-04-18 01:03:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:40.113385 | orchestrator | 2026-04-18 01:03:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:03:40.114891 | orchestrator | 2026-04-18 01:03:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:40.114948 | orchestrator | 2026-04-18 01:03:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:43.153322 | orchestrator | 2026-04-18 01:03:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:43.154959 | orchestrator | 2026-04-18 01:03:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:43.155034 | orchestrator | 2026-04-18 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:46.194387 | orchestrator | 2026-04-18 01:03:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:46.195775 | orchestrator | 2026-04-18 01:03:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:46.195818 | orchestrator | 2026-04-18 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:49.235012 | orchestrator | 2026-04-18 01:03:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:49.236558 | orchestrator | 2026-04-18 01:03:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:49.236601 | orchestrator | 2026-04-18 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:52.275913 | orchestrator | 2026-04-18 01:03:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:52.276393 | orchestrator | 2026-04-18 01:03:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:52.276611 | orchestrator | 2026-04-18 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:55.325744 | orchestrator | 2026-04-18 01:03:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:55.327710 | orchestrator | 2026-04-18 01:03:55 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:55.327752 | orchestrator | 2026-04-18 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:03:58.373275 | orchestrator | 2026-04-18 01:03:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:03:58.375287 | orchestrator | 2026-04-18 01:03:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:03:58.375346 | orchestrator | 2026-04-18 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:04:01.420415 | orchestrator | 2026-04-18 01:04:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:04:01.422305 | orchestrator | 2026-04-18 01:04:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:04:01.422365 | orchestrator | 2026-04-18 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:04:04.469596 | orchestrator | 2026-04-18 01:04:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:04:04.471764 | orchestrator | 2026-04-18 01:04:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:04:04.471827 | orchestrator | 2026-04-18 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:04:07.511135 | orchestrator | 2026-04-18 01:04:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:04:07.512815 | orchestrator | 2026-04-18 01:04:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:04:07.512968 | orchestrator | 2026-04-18 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:04:10.557084 | orchestrator | 2026-04-18 01:04:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:04:10.558765 | orchestrator | 2026-04-18 01:04:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
2026-04-18 01:04:10.558853 | orchestrator | 2026-04-18 01:04:10 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:04:13.601303 | orchestrator | 2026-04-18 01:04:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:04:13.602846 | orchestrator | 2026-04-18 01:04:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 01:04:13.602994 | orchestrator | 2026-04-18 01:04:13 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated every ~3 seconds from 01:04:16 through 01:09:39; tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 remained in state STARTED throughout ...]
2026-04-18 01:09:42.615635 | orchestrator | 2026-04-18 01:09:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:09:42.618959 | orchestrator | 2026-04-18 01:09:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 01:09:42.619006 | orchestrator | 2026-04-18 01:09:42 | INFO  | Wait 1 second(s)
until the next check 2026-04-18 01:09:45.671477 | orchestrator | 2026-04-18 01:09:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:09:45.673200 | orchestrator | 2026-04-18 01:09:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:09:45.673247 | orchestrator | 2026-04-18 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:09:48.720345 | orchestrator | 2026-04-18 01:09:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:09:48.723225 | orchestrator | 2026-04-18 01:09:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:09:48.723380 | orchestrator | 2026-04-18 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:09:51.769606 | orchestrator | 2026-04-18 01:09:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:09:51.771161 | orchestrator | 2026-04-18 01:09:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:09:51.771216 | orchestrator | 2026-04-18 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:09:54.820949 | orchestrator | 2026-04-18 01:09:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:09:54.822459 | orchestrator | 2026-04-18 01:09:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:09:54.822579 | orchestrator | 2026-04-18 01:09:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:09:57.868252 | orchestrator | 2026-04-18 01:09:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:09:57.869593 | orchestrator | 2026-04-18 01:09:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:09:57.869857 | orchestrator | 2026-04-18 01:09:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:00.913838 | orchestrator | 2026-04-18 
01:10:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:00.915584 | orchestrator | 2026-04-18 01:10:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:00.915643 | orchestrator | 2026-04-18 01:10:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:03.965806 | orchestrator | 2026-04-18 01:10:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:03.966399 | orchestrator | 2026-04-18 01:10:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:03.966586 | orchestrator | 2026-04-18 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:07.011603 | orchestrator | 2026-04-18 01:10:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:07.014474 | orchestrator | 2026-04-18 01:10:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:07.014663 | orchestrator | 2026-04-18 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:10.065482 | orchestrator | 2026-04-18 01:10:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:10.067530 | orchestrator | 2026-04-18 01:10:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:10.067588 | orchestrator | 2026-04-18 01:10:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:13.117823 | orchestrator | 2026-04-18 01:10:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:13.119233 | orchestrator | 2026-04-18 01:10:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:13.119324 | orchestrator | 2026-04-18 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:16.169017 | orchestrator | 2026-04-18 01:10:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:10:16.172094 | orchestrator | 2026-04-18 01:10:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:16.172241 | orchestrator | 2026-04-18 01:10:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:19.221553 | orchestrator | 2026-04-18 01:10:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:19.223437 | orchestrator | 2026-04-18 01:10:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:19.223512 | orchestrator | 2026-04-18 01:10:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:22.273620 | orchestrator | 2026-04-18 01:10:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:22.274939 | orchestrator | 2026-04-18 01:10:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:22.275088 | orchestrator | 2026-04-18 01:10:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:25.322602 | orchestrator | 2026-04-18 01:10:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:25.324004 | orchestrator | 2026-04-18 01:10:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:25.324063 | orchestrator | 2026-04-18 01:10:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:28.370506 | orchestrator | 2026-04-18 01:10:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:28.372327 | orchestrator | 2026-04-18 01:10:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:28.372491 | orchestrator | 2026-04-18 01:10:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:31.419251 | orchestrator | 2026-04-18 01:10:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:31.420665 | orchestrator | 2026-04-18 01:10:31 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:31.420752 | orchestrator | 2026-04-18 01:10:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:34.467856 | orchestrator | 2026-04-18 01:10:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:34.469671 | orchestrator | 2026-04-18 01:10:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:34.469737 | orchestrator | 2026-04-18 01:10:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:37.516550 | orchestrator | 2026-04-18 01:10:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:37.518424 | orchestrator | 2026-04-18 01:10:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:37.518548 | orchestrator | 2026-04-18 01:10:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:40.569070 | orchestrator | 2026-04-18 01:10:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:40.571384 | orchestrator | 2026-04-18 01:10:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:40.571969 | orchestrator | 2026-04-18 01:10:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:43.614314 | orchestrator | 2026-04-18 01:10:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:43.615679 | orchestrator | 2026-04-18 01:10:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:43.615747 | orchestrator | 2026-04-18 01:10:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:46.660688 | orchestrator | 2026-04-18 01:10:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:46.662281 | orchestrator | 2026-04-18 01:10:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:10:46.662317 | orchestrator | 2026-04-18 01:10:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:49.706116 | orchestrator | 2026-04-18 01:10:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:49.707967 | orchestrator | 2026-04-18 01:10:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:49.708007 | orchestrator | 2026-04-18 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:52.751327 | orchestrator | 2026-04-18 01:10:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:52.753987 | orchestrator | 2026-04-18 01:10:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:52.754186 | orchestrator | 2026-04-18 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:55.795829 | orchestrator | 2026-04-18 01:10:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:55.797187 | orchestrator | 2026-04-18 01:10:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:55.797219 | orchestrator | 2026-04-18 01:10:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:10:58.845866 | orchestrator | 2026-04-18 01:10:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:10:58.847374 | orchestrator | 2026-04-18 01:10:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:10:58.847455 | orchestrator | 2026-04-18 01:10:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:01.889327 | orchestrator | 2026-04-18 01:11:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:01.892158 | orchestrator | 2026-04-18 01:11:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:01.892273 | orchestrator | 2026-04-18 01:11:01 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:11:04.929502 | orchestrator | 2026-04-18 01:11:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:04.931643 | orchestrator | 2026-04-18 01:11:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:04.931743 | orchestrator | 2026-04-18 01:11:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:07.975178 | orchestrator | 2026-04-18 01:11:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:07.977581 | orchestrator | 2026-04-18 01:11:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:07.977680 | orchestrator | 2026-04-18 01:11:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:11.026209 | orchestrator | 2026-04-18 01:11:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:11.028082 | orchestrator | 2026-04-18 01:11:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:11.028181 | orchestrator | 2026-04-18 01:11:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:14.075795 | orchestrator | 2026-04-18 01:11:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:14.079588 | orchestrator | 2026-04-18 01:11:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:14.079661 | orchestrator | 2026-04-18 01:11:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:17.129443 | orchestrator | 2026-04-18 01:11:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:17.130207 | orchestrator | 2026-04-18 01:11:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:17.130238 | orchestrator | 2026-04-18 01:11:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:20.168525 | orchestrator | 2026-04-18 
01:11:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:20.170777 | orchestrator | 2026-04-18 01:11:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:20.170842 | orchestrator | 2026-04-18 01:11:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:23.213410 | orchestrator | 2026-04-18 01:11:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:23.214924 | orchestrator | 2026-04-18 01:11:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:23.215008 | orchestrator | 2026-04-18 01:11:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:26.260685 | orchestrator | 2026-04-18 01:11:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:26.262888 | orchestrator | 2026-04-18 01:11:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:26.263393 | orchestrator | 2026-04-18 01:11:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:29.310447 | orchestrator | 2026-04-18 01:11:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:29.312860 | orchestrator | 2026-04-18 01:11:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:29.312908 | orchestrator | 2026-04-18 01:11:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:32.356899 | orchestrator | 2026-04-18 01:11:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:32.359333 | orchestrator | 2026-04-18 01:11:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:32.359397 | orchestrator | 2026-04-18 01:11:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:35.402339 | orchestrator | 2026-04-18 01:11:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:11:35.403971 | orchestrator | 2026-04-18 01:11:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:35.404019 | orchestrator | 2026-04-18 01:11:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:38.448711 | orchestrator | 2026-04-18 01:11:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:38.451054 | orchestrator | 2026-04-18 01:11:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:38.451217 | orchestrator | 2026-04-18 01:11:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:41.498282 | orchestrator | 2026-04-18 01:11:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:41.500049 | orchestrator | 2026-04-18 01:11:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:41.500155 | orchestrator | 2026-04-18 01:11:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:44.548008 | orchestrator | 2026-04-18 01:11:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:44.549725 | orchestrator | 2026-04-18 01:11:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:44.549791 | orchestrator | 2026-04-18 01:11:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:47.600674 | orchestrator | 2026-04-18 01:11:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:47.602686 | orchestrator | 2026-04-18 01:11:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:47.602918 | orchestrator | 2026-04-18 01:11:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:50.646861 | orchestrator | 2026-04-18 01:11:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:50.648045 | orchestrator | 2026-04-18 01:11:50 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:50.648085 | orchestrator | 2026-04-18 01:11:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:53.691566 | orchestrator | 2026-04-18 01:11:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:53.693375 | orchestrator | 2026-04-18 01:11:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:53.693435 | orchestrator | 2026-04-18 01:11:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:56.733822 | orchestrator | 2026-04-18 01:11:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:56.736350 | orchestrator | 2026-04-18 01:11:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:56.736431 | orchestrator | 2026-04-18 01:11:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:11:59.780948 | orchestrator | 2026-04-18 01:11:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:11:59.782940 | orchestrator | 2026-04-18 01:11:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:11:59.783010 | orchestrator | 2026-04-18 01:11:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:02.820072 | orchestrator | 2026-04-18 01:12:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:02.821747 | orchestrator | 2026-04-18 01:12:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:02.821806 | orchestrator | 2026-04-18 01:12:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:05.861775 | orchestrator | 2026-04-18 01:12:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:05.863518 | orchestrator | 2026-04-18 01:12:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:12:05.863586 | orchestrator | 2026-04-18 01:12:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:08.901261 | orchestrator | 2026-04-18 01:12:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:08.903274 | orchestrator | 2026-04-18 01:12:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:08.903346 | orchestrator | 2026-04-18 01:12:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:11.946876 | orchestrator | 2026-04-18 01:12:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:11.949575 | orchestrator | 2026-04-18 01:12:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:11.949627 | orchestrator | 2026-04-18 01:12:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:14.998771 | orchestrator | 2026-04-18 01:12:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:14.999922 | orchestrator | 2026-04-18 01:12:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:14.999982 | orchestrator | 2026-04-18 01:12:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:18.050336 | orchestrator | 2026-04-18 01:12:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:18.053624 | orchestrator | 2026-04-18 01:12:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:18.053682 | orchestrator | 2026-04-18 01:12:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:21.097076 | orchestrator | 2026-04-18 01:12:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:21.098794 | orchestrator | 2026-04-18 01:12:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:21.098863 | orchestrator | 2026-04-18 01:12:21 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:12:24.149375 | orchestrator | 2026-04-18 01:12:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:24.151043 | orchestrator | 2026-04-18 01:12:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:24.151161 | orchestrator | 2026-04-18 01:12:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:27.199874 | orchestrator | 2026-04-18 01:12:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:27.205030 | orchestrator | 2026-04-18 01:12:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:27.205096 | orchestrator | 2026-04-18 01:12:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:30.246763 | orchestrator | 2026-04-18 01:12:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:30.248167 | orchestrator | 2026-04-18 01:12:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:30.248309 | orchestrator | 2026-04-18 01:12:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:33.294675 | orchestrator | 2026-04-18 01:12:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:33.296145 | orchestrator | 2026-04-18 01:12:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:33.296197 | orchestrator | 2026-04-18 01:12:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:36.346422 | orchestrator | 2026-04-18 01:12:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:36.347600 | orchestrator | 2026-04-18 01:12:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:36.347728 | orchestrator | 2026-04-18 01:12:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:39.387607 | orchestrator | 2026-04-18 
01:12:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:39.389766 | orchestrator | 2026-04-18 01:12:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:39.389830 | orchestrator | 2026-04-18 01:12:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:42.430755 | orchestrator | 2026-04-18 01:12:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:42.432315 | orchestrator | 2026-04-18 01:12:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:42.432389 | orchestrator | 2026-04-18 01:12:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:45.476853 | orchestrator | 2026-04-18 01:12:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:45.476979 | orchestrator | 2026-04-18 01:12:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:45.476998 | orchestrator | 2026-04-18 01:12:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:48.524064 | orchestrator | 2026-04-18 01:12:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:48.526162 | orchestrator | 2026-04-18 01:12:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:48.526232 | orchestrator | 2026-04-18 01:12:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:51.573531 | orchestrator | 2026-04-18 01:12:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:51.575727 | orchestrator | 2026-04-18 01:12:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:51.575794 | orchestrator | 2026-04-18 01:12:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:54.621774 | orchestrator | 2026-04-18 01:12:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:12:54.623240 | orchestrator | 2026-04-18 01:12:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:54.623286 | orchestrator | 2026-04-18 01:12:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:12:57.668180 | orchestrator | 2026-04-18 01:12:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:12:57.670061 | orchestrator | 2026-04-18 01:12:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:12:57.670136 | orchestrator | 2026-04-18 01:12:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:00.716106 | orchestrator | 2026-04-18 01:13:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:00.718414 | orchestrator | 2026-04-18 01:13:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:13:00.718482 | orchestrator | 2026-04-18 01:13:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:03.766565 | orchestrator | 2026-04-18 01:13:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:03.769386 | orchestrator | 2026-04-18 01:13:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:13:03.769453 | orchestrator | 2026-04-18 01:13:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:06.814388 | orchestrator | 2026-04-18 01:13:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:06.816732 | orchestrator | 2026-04-18 01:13:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:13:06.816830 | orchestrator | 2026-04-18 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:09.859968 | orchestrator | 2026-04-18 01:13:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:09.860535 | orchestrator | 2026-04-18 01:13:09 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:13:09.860574 | orchestrator | 2026-04-18 01:13:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:12.904663 | orchestrator | 2026-04-18 01:13:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:12.906222 | orchestrator | 2026-04-18 01:13:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:13:12.906272 | orchestrator | 2026-04-18 01:13:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:15.949668 | orchestrator | 2026-04-18 01:13:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:15.951554 | orchestrator | 2026-04-18 01:13:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:13:15.951715 | orchestrator | 2026-04-18 01:13:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:18.990757 | orchestrator | 2026-04-18 01:13:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:18.992537 | orchestrator | 2026-04-18 01:13:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:13:18.992591 | orchestrator | 2026-04-18 01:13:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:22.036688 | orchestrator | 2026-04-18 01:13:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:22.037881 | orchestrator | 2026-04-18 01:13:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:13:22.037945 | orchestrator | 2026-04-18 01:13:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:25.085922 | orchestrator | 2026-04-18 01:13:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:25.088061 | orchestrator | 2026-04-18 01:13:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:13:25.088240 | orchestrator | 2026-04-18 01:13:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:13:28.132393 | orchestrator | 2026-04-18 01:13:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:13:28.135431 | orchestrator | 2026-04-18 01:13:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:13:28.135501 | orchestrator | 2026-04-18 01:13:28 | INFO  | Wait 1 second(s) until the next check
[… identical polling output repeated roughly every 3 seconds; tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 remained in state STARTED from 01:13:28 through 01:18:26 …]
2026-04-18 01:18:26.709951 | orchestrator | 2026-04-18 01:18:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:26.712300 | orchestrator | 2026-04-18 01:18:26 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:26.712393 | orchestrator | 2026-04-18 01:18:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:29.760193 | orchestrator | 2026-04-18 01:18:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:29.761275 | orchestrator | 2026-04-18 01:18:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:29.761369 | orchestrator | 2026-04-18 01:18:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:32.804819 | orchestrator | 2026-04-18 01:18:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:32.807249 | orchestrator | 2026-04-18 01:18:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:32.807322 | orchestrator | 2026-04-18 01:18:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:35.849142 | orchestrator | 2026-04-18 01:18:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:35.850283 | orchestrator | 2026-04-18 01:18:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:35.850396 | orchestrator | 2026-04-18 01:18:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:38.895885 | orchestrator | 2026-04-18 01:18:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:38.897614 | orchestrator | 2026-04-18 01:18:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:38.897694 | orchestrator | 2026-04-18 01:18:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:41.943297 | orchestrator | 2026-04-18 01:18:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:41.944890 | orchestrator | 2026-04-18 01:18:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:18:41.944947 | orchestrator | 2026-04-18 01:18:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:44.987159 | orchestrator | 2026-04-18 01:18:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:44.989552 | orchestrator | 2026-04-18 01:18:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:44.989629 | orchestrator | 2026-04-18 01:18:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:48.031143 | orchestrator | 2026-04-18 01:18:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:48.031770 | orchestrator | 2026-04-18 01:18:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:48.031953 | orchestrator | 2026-04-18 01:18:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:51.070426 | orchestrator | 2026-04-18 01:18:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:51.072631 | orchestrator | 2026-04-18 01:18:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:51.072696 | orchestrator | 2026-04-18 01:18:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:54.119717 | orchestrator | 2026-04-18 01:18:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:54.121872 | orchestrator | 2026-04-18 01:18:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:54.121935 | orchestrator | 2026-04-18 01:18:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:18:57.158319 | orchestrator | 2026-04-18 01:18:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:18:57.162238 | orchestrator | 2026-04-18 01:18:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:18:57.162308 | orchestrator | 2026-04-18 01:18:57 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:19:00.206646 | orchestrator | 2026-04-18 01:19:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:00.208024 | orchestrator | 2026-04-18 01:19:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:00.208160 | orchestrator | 2026-04-18 01:19:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:03.252543 | orchestrator | 2026-04-18 01:19:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:03.254177 | orchestrator | 2026-04-18 01:19:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:03.254237 | orchestrator | 2026-04-18 01:19:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:06.301262 | orchestrator | 2026-04-18 01:19:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:06.303169 | orchestrator | 2026-04-18 01:19:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:06.303235 | orchestrator | 2026-04-18 01:19:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:09.352278 | orchestrator | 2026-04-18 01:19:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:09.357079 | orchestrator | 2026-04-18 01:19:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:09.357180 | orchestrator | 2026-04-18 01:19:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:12.409373 | orchestrator | 2026-04-18 01:19:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:12.411436 | orchestrator | 2026-04-18 01:19:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:12.411509 | orchestrator | 2026-04-18 01:19:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:15.466661 | orchestrator | 2026-04-18 
01:19:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:15.467770 | orchestrator | 2026-04-18 01:19:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:15.467819 | orchestrator | 2026-04-18 01:19:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:18.514416 | orchestrator | 2026-04-18 01:19:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:18.517649 | orchestrator | 2026-04-18 01:19:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:18.517750 | orchestrator | 2026-04-18 01:19:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:21.563507 | orchestrator | 2026-04-18 01:19:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:21.566433 | orchestrator | 2026-04-18 01:19:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:21.566617 | orchestrator | 2026-04-18 01:19:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:24.611250 | orchestrator | 2026-04-18 01:19:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:24.613006 | orchestrator | 2026-04-18 01:19:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:24.613069 | orchestrator | 2026-04-18 01:19:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:27.655376 | orchestrator | 2026-04-18 01:19:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:27.656316 | orchestrator | 2026-04-18 01:19:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:27.656559 | orchestrator | 2026-04-18 01:19:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:30.700154 | orchestrator | 2026-04-18 01:19:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:19:30.700873 | orchestrator | 2026-04-18 01:19:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:30.700893 | orchestrator | 2026-04-18 01:19:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:33.746524 | orchestrator | 2026-04-18 01:19:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:33.748053 | orchestrator | 2026-04-18 01:19:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:33.748222 | orchestrator | 2026-04-18 01:19:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:36.794082 | orchestrator | 2026-04-18 01:19:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:36.797244 | orchestrator | 2026-04-18 01:19:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:36.797322 | orchestrator | 2026-04-18 01:19:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:39.855799 | orchestrator | 2026-04-18 01:19:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:39.855883 | orchestrator | 2026-04-18 01:19:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:39.855895 | orchestrator | 2026-04-18 01:19:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:42.900528 | orchestrator | 2026-04-18 01:19:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:42.901144 | orchestrator | 2026-04-18 01:19:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:42.901174 | orchestrator | 2026-04-18 01:19:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:45.950156 | orchestrator | 2026-04-18 01:19:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:45.951027 | orchestrator | 2026-04-18 01:19:45 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:45.951088 | orchestrator | 2026-04-18 01:19:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:48.993841 | orchestrator | 2026-04-18 01:19:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:48.995872 | orchestrator | 2026-04-18 01:19:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:48.995953 | orchestrator | 2026-04-18 01:19:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:52.044004 | orchestrator | 2026-04-18 01:19:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:52.044086 | orchestrator | 2026-04-18 01:19:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:52.044093 | orchestrator | 2026-04-18 01:19:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:55.097042 | orchestrator | 2026-04-18 01:19:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:55.098279 | orchestrator | 2026-04-18 01:19:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:55.098680 | orchestrator | 2026-04-18 01:19:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:19:58.143637 | orchestrator | 2026-04-18 01:19:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:19:58.144572 | orchestrator | 2026-04-18 01:19:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:19:58.144627 | orchestrator | 2026-04-18 01:19:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:20:01.195562 | orchestrator | 2026-04-18 01:20:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:20:01.196840 | orchestrator | 2026-04-18 01:20:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:20:01.196881 | orchestrator | 2026-04-18 01:20:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:20:04.234299 | orchestrator | 2026-04-18 01:20:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:20:04.236043 | orchestrator | 2026-04-18 01:20:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:20:04.236135 | orchestrator | 2026-04-18 01:20:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:07.385729 | orchestrator | 2026-04-18 01:22:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:07.385848 | orchestrator | 2026-04-18 01:22:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:07.385857 | orchestrator | 2026-04-18 01:22:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:10.429129 | orchestrator | 2026-04-18 01:22:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:10.430127 | orchestrator | 2026-04-18 01:22:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:10.430159 | orchestrator | 2026-04-18 01:22:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:13.469024 | orchestrator | 2026-04-18 01:22:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:13.470636 | orchestrator | 2026-04-18 01:22:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:13.470700 | orchestrator | 2026-04-18 01:22:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:16.510422 | orchestrator | 2026-04-18 01:22:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:16.511958 | orchestrator | 2026-04-18 01:22:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:16.512073 | orchestrator | 2026-04-18 01:22:16 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:22:19.552834 | orchestrator | 2026-04-18 01:22:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:19.555298 | orchestrator | 2026-04-18 01:22:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:19.555361 | orchestrator | 2026-04-18 01:22:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:22.600108 | orchestrator | 2026-04-18 01:22:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:22.601334 | orchestrator | 2026-04-18 01:22:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:22.601476 | orchestrator | 2026-04-18 01:22:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:25.646201 | orchestrator | 2026-04-18 01:22:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:25.647606 | orchestrator | 2026-04-18 01:22:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:25.647675 | orchestrator | 2026-04-18 01:22:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:28.698119 | orchestrator | 2026-04-18 01:22:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:28.699572 | orchestrator | 2026-04-18 01:22:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:28.699705 | orchestrator | 2026-04-18 01:22:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:31.742829 | orchestrator | 2026-04-18 01:22:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:31.744266 | orchestrator | 2026-04-18 01:22:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:31.744310 | orchestrator | 2026-04-18 01:22:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:34.789943 | orchestrator | 2026-04-18 
01:22:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:34.791118 | orchestrator | 2026-04-18 01:22:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:34.791167 | orchestrator | 2026-04-18 01:22:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:37.834695 | orchestrator | 2026-04-18 01:22:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:37.835635 | orchestrator | 2026-04-18 01:22:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:37.835809 | orchestrator | 2026-04-18 01:22:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:40.876814 | orchestrator | 2026-04-18 01:22:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:40.878291 | orchestrator | 2026-04-18 01:22:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:40.878354 | orchestrator | 2026-04-18 01:22:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:43.922144 | orchestrator | 2026-04-18 01:22:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:43.923805 | orchestrator | 2026-04-18 01:22:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:43.923914 | orchestrator | 2026-04-18 01:22:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:46.973953 | orchestrator | 2026-04-18 01:22:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:46.975969 | orchestrator | 2026-04-18 01:22:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:46.976026 | orchestrator | 2026-04-18 01:22:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:50.033176 | orchestrator | 2026-04-18 01:22:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:22:50.036117 | orchestrator | 2026-04-18 01:22:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:50.036175 | orchestrator | 2026-04-18 01:22:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:53.079755 | orchestrator | 2026-04-18 01:22:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:53.080713 | orchestrator | 2026-04-18 01:22:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:53.080764 | orchestrator | 2026-04-18 01:22:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:56.125839 | orchestrator | 2026-04-18 01:22:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:56.127265 | orchestrator | 2026-04-18 01:22:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:56.127321 | orchestrator | 2026-04-18 01:22:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:22:59.169879 | orchestrator | 2026-04-18 01:22:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:22:59.170128 | orchestrator | 2026-04-18 01:22:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:22:59.170224 | orchestrator | 2026-04-18 01:22:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:02.211104 | orchestrator | 2026-04-18 01:23:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:02.212560 | orchestrator | 2026-04-18 01:23:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:02.212655 | orchestrator | 2026-04-18 01:23:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:05.255630 | orchestrator | 2026-04-18 01:23:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:05.257193 | orchestrator | 2026-04-18 01:23:05 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:05.257418 | orchestrator | 2026-04-18 01:23:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:08.307358 | orchestrator | 2026-04-18 01:23:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:08.308672 | orchestrator | 2026-04-18 01:23:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:08.308741 | orchestrator | 2026-04-18 01:23:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:11.345374 | orchestrator | 2026-04-18 01:23:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:11.347365 | orchestrator | 2026-04-18 01:23:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:11.347445 | orchestrator | 2026-04-18 01:23:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:14.391983 | orchestrator | 2026-04-18 01:23:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:14.393486 | orchestrator | 2026-04-18 01:23:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:14.393570 | orchestrator | 2026-04-18 01:23:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:17.447941 | orchestrator | 2026-04-18 01:23:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:17.450582 | orchestrator | 2026-04-18 01:23:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:17.450689 | orchestrator | 2026-04-18 01:23:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:20.497960 | orchestrator | 2026-04-18 01:23:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:20.499580 | orchestrator | 2026-04-18 01:23:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:23:20.499644 | orchestrator | 2026-04-18 01:23:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:23.536551 | orchestrator | 2026-04-18 01:23:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:23.537231 | orchestrator | 2026-04-18 01:23:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:23.537322 | orchestrator | 2026-04-18 01:23:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:26.573958 | orchestrator | 2026-04-18 01:23:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:26.575412 | orchestrator | 2026-04-18 01:23:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:26.575550 | orchestrator | 2026-04-18 01:23:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:29.613836 | orchestrator | 2026-04-18 01:23:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:29.615602 | orchestrator | 2026-04-18 01:23:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:29.615683 | orchestrator | 2026-04-18 01:23:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:32.653631 | orchestrator | 2026-04-18 01:23:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:32.655465 | orchestrator | 2026-04-18 01:23:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:32.655530 | orchestrator | 2026-04-18 01:23:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:35.693687 | orchestrator | 2026-04-18 01:23:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:35.695226 | orchestrator | 2026-04-18 01:23:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:35.695275 | orchestrator | 2026-04-18 01:23:35 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:23:38.739511 | orchestrator | 2026-04-18 01:23:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:38.742361 | orchestrator | 2026-04-18 01:23:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:38.742505 | orchestrator | 2026-04-18 01:23:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:41.788288 | orchestrator | 2026-04-18 01:23:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:41.789576 | orchestrator | 2026-04-18 01:23:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:41.789598 | orchestrator | 2026-04-18 01:23:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:44.837770 | orchestrator | 2026-04-18 01:23:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:44.840348 | orchestrator | 2026-04-18 01:23:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:44.840410 | orchestrator | 2026-04-18 01:23:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:47.888389 | orchestrator | 2026-04-18 01:23:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:47.891751 | orchestrator | 2026-04-18 01:23:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:47.891824 | orchestrator | 2026-04-18 01:23:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:50.948147 | orchestrator | 2026-04-18 01:23:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:50.949853 | orchestrator | 2026-04-18 01:23:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:50.949949 | orchestrator | 2026-04-18 01:23:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:53.992755 | orchestrator | 2026-04-18 
01:23:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:53.993716 | orchestrator | 2026-04-18 01:23:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:53.993779 | orchestrator | 2026-04-18 01:23:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:23:57.041264 | orchestrator | 2026-04-18 01:23:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:23:57.041795 | orchestrator | 2026-04-18 01:23:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:23:57.041835 | orchestrator | 2026-04-18 01:23:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:24:00.081503 | orchestrator | 2026-04-18 01:24:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:24:00.083110 | orchestrator | 2026-04-18 01:24:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:24:00.083193 | orchestrator | 2026-04-18 01:24:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:24:03.128752 | orchestrator | 2026-04-18 01:24:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:24:03.130814 | orchestrator | 2026-04-18 01:24:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:24:03.130987 | orchestrator | 2026-04-18 01:24:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:24:06.177126 | orchestrator | 2026-04-18 01:24:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:24:06.178500 | orchestrator | 2026-04-18 01:24:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:24:06.178557 | orchestrator | 2026-04-18 01:24:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:24:09.219279 | orchestrator | 2026-04-18 01:24:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:24:09.221166 | orchestrator | 2026-04-18 01:24:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 01:24:09.221265 | orchestrator | 2026-04-18 01:24:09 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated roughly every 3 seconds: tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 remained in state STARTED from 01:24:09 through 01:29:41 ...]
2026-04-18 01:29:41.505463 | orchestrator | 2026-04-18 01:29:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:29:41.508884 | orchestrator | 2026-04-18 01:29:41 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:29:41.509377 | orchestrator | 2026-04-18 01:29:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:29:44.558467 | orchestrator | 2026-04-18 01:29:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:29:44.561225 | orchestrator | 2026-04-18 01:29:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:29:44.561302 | orchestrator | 2026-04-18 01:29:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:29:47.604894 | orchestrator | 2026-04-18 01:29:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:29:47.605182 | orchestrator | 2026-04-18 01:29:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:29:47.605291 | orchestrator | 2026-04-18 01:29:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:29:50.649990 | orchestrator | 2026-04-18 01:29:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:29:50.652768 | orchestrator | 2026-04-18 01:29:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:29:50.652829 | orchestrator | 2026-04-18 01:29:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:29:53.700686 | orchestrator | 2026-04-18 01:29:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:29:53.702517 | orchestrator | 2026-04-18 01:29:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:29:53.702578 | orchestrator | 2026-04-18 01:29:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:29:56.752306 | orchestrator | 2026-04-18 01:29:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:29:56.754702 | orchestrator | 2026-04-18 01:29:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:29:56.754825 | orchestrator | 2026-04-18 01:29:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:29:59.806238 | orchestrator | 2026-04-18 01:29:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:29:59.807958 | orchestrator | 2026-04-18 01:29:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:29:59.808035 | orchestrator | 2026-04-18 01:29:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:02.856982 | orchestrator | 2026-04-18 01:30:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:02.858400 | orchestrator | 2026-04-18 01:30:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:02.858466 | orchestrator | 2026-04-18 01:30:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:05.903920 | orchestrator | 2026-04-18 01:30:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:05.904898 | orchestrator | 2026-04-18 01:30:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:05.904988 | orchestrator | 2026-04-18 01:30:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:08.951874 | orchestrator | 2026-04-18 01:30:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:08.952548 | orchestrator | 2026-04-18 01:30:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:08.952588 | orchestrator | 2026-04-18 01:30:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:11.998124 | orchestrator | 2026-04-18 01:30:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:12.001470 | orchestrator | 2026-04-18 01:30:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:12.001540 | orchestrator | 2026-04-18 01:30:12 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:30:15.052523 | orchestrator | 2026-04-18 01:30:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:15.054450 | orchestrator | 2026-04-18 01:30:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:15.054582 | orchestrator | 2026-04-18 01:30:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:18.095876 | orchestrator | 2026-04-18 01:30:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:18.098388 | orchestrator | 2026-04-18 01:30:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:18.098459 | orchestrator | 2026-04-18 01:30:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:21.142699 | orchestrator | 2026-04-18 01:30:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:21.144802 | orchestrator | 2026-04-18 01:30:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:21.144864 | orchestrator | 2026-04-18 01:30:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:24.191494 | orchestrator | 2026-04-18 01:30:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:24.194418 | orchestrator | 2026-04-18 01:30:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:24.194485 | orchestrator | 2026-04-18 01:30:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:27.240245 | orchestrator | 2026-04-18 01:30:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:27.241442 | orchestrator | 2026-04-18 01:30:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:27.241491 | orchestrator | 2026-04-18 01:30:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:30.291652 | orchestrator | 2026-04-18 
01:30:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:30.293159 | orchestrator | 2026-04-18 01:30:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:30.293219 | orchestrator | 2026-04-18 01:30:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:33.334802 | orchestrator | 2026-04-18 01:30:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:33.336167 | orchestrator | 2026-04-18 01:30:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:33.336284 | orchestrator | 2026-04-18 01:30:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:36.378609 | orchestrator | 2026-04-18 01:30:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:36.380190 | orchestrator | 2026-04-18 01:30:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:36.380243 | orchestrator | 2026-04-18 01:30:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:39.420517 | orchestrator | 2026-04-18 01:30:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:39.421273 | orchestrator | 2026-04-18 01:30:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:39.421313 | orchestrator | 2026-04-18 01:30:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:42.465832 | orchestrator | 2026-04-18 01:30:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:42.468179 | orchestrator | 2026-04-18 01:30:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:42.468271 | orchestrator | 2026-04-18 01:30:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:45.512820 | orchestrator | 2026-04-18 01:30:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:30:45.514532 | orchestrator | 2026-04-18 01:30:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:45.514586 | orchestrator | 2026-04-18 01:30:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:48.557615 | orchestrator | 2026-04-18 01:30:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:48.559221 | orchestrator | 2026-04-18 01:30:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:48.559312 | orchestrator | 2026-04-18 01:30:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:51.604747 | orchestrator | 2026-04-18 01:30:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:51.605764 | orchestrator | 2026-04-18 01:30:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:51.605815 | orchestrator | 2026-04-18 01:30:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:54.652922 | orchestrator | 2026-04-18 01:30:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:54.654302 | orchestrator | 2026-04-18 01:30:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:54.654355 | orchestrator | 2026-04-18 01:30:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:30:57.700111 | orchestrator | 2026-04-18 01:30:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:30:57.701748 | orchestrator | 2026-04-18 01:30:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:30:57.701799 | orchestrator | 2026-04-18 01:30:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:00.744126 | orchestrator | 2026-04-18 01:31:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:00.745259 | orchestrator | 2026-04-18 01:31:00 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:00.745346 | orchestrator | 2026-04-18 01:31:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:03.787567 | orchestrator | 2026-04-18 01:31:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:03.788873 | orchestrator | 2026-04-18 01:31:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:03.788903 | orchestrator | 2026-04-18 01:31:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:06.837974 | orchestrator | 2026-04-18 01:31:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:06.839962 | orchestrator | 2026-04-18 01:31:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:06.840096 | orchestrator | 2026-04-18 01:31:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:09.879214 | orchestrator | 2026-04-18 01:31:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:09.881051 | orchestrator | 2026-04-18 01:31:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:09.881123 | orchestrator | 2026-04-18 01:31:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:12.930527 | orchestrator | 2026-04-18 01:31:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:12.931257 | orchestrator | 2026-04-18 01:31:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:12.931301 | orchestrator | 2026-04-18 01:31:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:15.977223 | orchestrator | 2026-04-18 01:31:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:15.979461 | orchestrator | 2026-04-18 01:31:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:31:15.979563 | orchestrator | 2026-04-18 01:31:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:19.031193 | orchestrator | 2026-04-18 01:31:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:19.033693 | orchestrator | 2026-04-18 01:31:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:19.033752 | orchestrator | 2026-04-18 01:31:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:22.087667 | orchestrator | 2026-04-18 01:31:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:22.090190 | orchestrator | 2026-04-18 01:31:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:22.090244 | orchestrator | 2026-04-18 01:31:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:25.135184 | orchestrator | 2026-04-18 01:31:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:25.136565 | orchestrator | 2026-04-18 01:31:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:25.136629 | orchestrator | 2026-04-18 01:31:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:28.182846 | orchestrator | 2026-04-18 01:31:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:28.184304 | orchestrator | 2026-04-18 01:31:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:28.184409 | orchestrator | 2026-04-18 01:31:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:31.229400 | orchestrator | 2026-04-18 01:31:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:31.229844 | orchestrator | 2026-04-18 01:31:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:31.229954 | orchestrator | 2026-04-18 01:31:31 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:31:34.276565 | orchestrator | 2026-04-18 01:31:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:34.279333 | orchestrator | 2026-04-18 01:31:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:34.279402 | orchestrator | 2026-04-18 01:31:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:37.325798 | orchestrator | 2026-04-18 01:31:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:37.328767 | orchestrator | 2026-04-18 01:31:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:37.329036 | orchestrator | 2026-04-18 01:31:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:40.374754 | orchestrator | 2026-04-18 01:31:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:40.376684 | orchestrator | 2026-04-18 01:31:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:40.376736 | orchestrator | 2026-04-18 01:31:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:43.426783 | orchestrator | 2026-04-18 01:31:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:43.428358 | orchestrator | 2026-04-18 01:31:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:43.428659 | orchestrator | 2026-04-18 01:31:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:46.477720 | orchestrator | 2026-04-18 01:31:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:46.479239 | orchestrator | 2026-04-18 01:31:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:46.479302 | orchestrator | 2026-04-18 01:31:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:49.526832 | orchestrator | 2026-04-18 
01:31:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:49.528187 | orchestrator | 2026-04-18 01:31:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:49.528224 | orchestrator | 2026-04-18 01:31:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:52.571051 | orchestrator | 2026-04-18 01:31:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:52.572121 | orchestrator | 2026-04-18 01:31:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:52.572254 | orchestrator | 2026-04-18 01:31:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:55.609338 | orchestrator | 2026-04-18 01:31:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:55.610805 | orchestrator | 2026-04-18 01:31:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:55.610907 | orchestrator | 2026-04-18 01:31:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:31:58.652549 | orchestrator | 2026-04-18 01:31:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:31:58.653238 | orchestrator | 2026-04-18 01:31:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:31:58.653271 | orchestrator | 2026-04-18 01:31:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:01.695900 | orchestrator | 2026-04-18 01:32:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:01.696421 | orchestrator | 2026-04-18 01:32:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:01.696559 | orchestrator | 2026-04-18 01:32:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:04.743048 | orchestrator | 2026-04-18 01:32:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:32:04.744689 | orchestrator | 2026-04-18 01:32:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:04.744915 | orchestrator | 2026-04-18 01:32:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:07.789813 | orchestrator | 2026-04-18 01:32:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:07.791334 | orchestrator | 2026-04-18 01:32:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:07.791443 | orchestrator | 2026-04-18 01:32:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:10.840062 | orchestrator | 2026-04-18 01:32:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:10.841617 | orchestrator | 2026-04-18 01:32:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:10.842332 | orchestrator | 2026-04-18 01:32:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:13.884230 | orchestrator | 2026-04-18 01:32:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:13.885500 | orchestrator | 2026-04-18 01:32:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:13.885536 | orchestrator | 2026-04-18 01:32:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:16.930890 | orchestrator | 2026-04-18 01:32:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:16.932211 | orchestrator | 2026-04-18 01:32:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:16.932257 | orchestrator | 2026-04-18 01:32:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:19.980776 | orchestrator | 2026-04-18 01:32:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:19.981746 | orchestrator | 2026-04-18 01:32:19 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:19.981821 | orchestrator | 2026-04-18 01:32:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:23.022629 | orchestrator | 2026-04-18 01:32:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:23.024354 | orchestrator | 2026-04-18 01:32:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:23.024420 | orchestrator | 2026-04-18 01:32:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:26.069469 | orchestrator | 2026-04-18 01:32:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:26.070409 | orchestrator | 2026-04-18 01:32:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:26.070563 | orchestrator | 2026-04-18 01:32:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:29.115319 | orchestrator | 2026-04-18 01:32:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:29.116935 | orchestrator | 2026-04-18 01:32:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:29.117083 | orchestrator | 2026-04-18 01:32:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:32.153205 | orchestrator | 2026-04-18 01:32:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:32.154601 | orchestrator | 2026-04-18 01:32:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:32.154726 | orchestrator | 2026-04-18 01:32:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:35.205738 | orchestrator | 2026-04-18 01:32:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:35.206952 | orchestrator | 2026-04-18 01:32:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:32:35.207055 | orchestrator | 2026-04-18 01:32:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:38.254939 | orchestrator | 2026-04-18 01:32:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:38.256201 | orchestrator | 2026-04-18 01:32:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:38.256292 | orchestrator | 2026-04-18 01:32:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:41.310965 | orchestrator | 2026-04-18 01:32:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:41.312210 | orchestrator | 2026-04-18 01:32:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:41.312251 | orchestrator | 2026-04-18 01:32:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:44.352868 | orchestrator | 2026-04-18 01:32:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:44.355690 | orchestrator | 2026-04-18 01:32:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:44.355748 | orchestrator | 2026-04-18 01:32:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:47.407861 | orchestrator | 2026-04-18 01:32:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:47.410094 | orchestrator | 2026-04-18 01:32:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:47.410213 | orchestrator | 2026-04-18 01:32:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:50.459600 | orchestrator | 2026-04-18 01:32:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:50.461589 | orchestrator | 2026-04-18 01:32:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:50.461679 | orchestrator | 2026-04-18 01:32:50 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:32:53.511315 | orchestrator | 2026-04-18 01:32:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:53.512217 | orchestrator | 2026-04-18 01:32:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:53.512648 | orchestrator | 2026-04-18 01:32:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:56.553836 | orchestrator | 2026-04-18 01:32:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:56.557394 | orchestrator | 2026-04-18 01:32:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:56.557523 | orchestrator | 2026-04-18 01:32:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:32:59.605792 | orchestrator | 2026-04-18 01:32:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:32:59.608732 | orchestrator | 2026-04-18 01:32:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:32:59.608818 | orchestrator | 2026-04-18 01:32:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:33:02.652059 | orchestrator | 2026-04-18 01:33:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:33:02.653844 | orchestrator | 2026-04-18 01:33:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:33:02.653961 | orchestrator | 2026-04-18 01:33:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:33:05.705429 | orchestrator | 2026-04-18 01:33:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:33:05.707624 | orchestrator | 2026-04-18 01:33:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:33:05.707675 | orchestrator | 2026-04-18 01:33:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:33:08.759054 | orchestrator | 2026-04-18 
01:33:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:33:08.760927 | orchestrator | 2026-04-18 01:33:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:33:08.761041 | orchestrator | 2026-04-18 01:33:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:33:11.810113 | orchestrator | 2026-04-18 01:33:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:33:11.811281 | orchestrator | 2026-04-18 01:33:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:33:11.811329 | orchestrator | 2026-04-18 01:33:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:33:14.859478 | orchestrator | 2026-04-18 01:33:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:33:14.859531 | orchestrator | 2026-04-18 01:33:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:33:14.859539 | orchestrator | 2026-04-18 01:33:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:33:17.908401 | orchestrator | 2026-04-18 01:33:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:33:17.909473 | orchestrator | 2026-04-18 01:33:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:33:17.909620 | orchestrator | 2026-04-18 01:33:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:33:20.950387 | orchestrator | 2026-04-18 01:33:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:33:20.951708 | orchestrator | 2026-04-18 01:33:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:33:20.951766 | orchestrator | 2026-04-18 01:33:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:33:23.993464 | orchestrator | 2026-04-18 01:33:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:33:23.994508 | orchestrator | 2026-04-18 01:33:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 01:33:23.994560 | orchestrator | 2026-04-18 01:33:23 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:33:27.039084 | orchestrator | 2026-04-18 01:33:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:33:27.042232 | orchestrator | 2026-04-18 01:33:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 01:33:27.042314 | orchestrator | 2026-04-18 01:33:27 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:38:41.166752 | orchestrator | 2026-04-18 01:38:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state
STARTED 2026-04-18 01:38:41.167810 | orchestrator | 2026-04-18 01:38:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:38:41.167849 | orchestrator | 2026-04-18 01:38:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:38:44.222102 | orchestrator | 2026-04-18 01:38:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:38:44.223805 | orchestrator | 2026-04-18 01:38:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:38:44.224512 | orchestrator | 2026-04-18 01:38:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:38:47.265073 | orchestrator | 2026-04-18 01:38:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:38:47.266324 | orchestrator | 2026-04-18 01:38:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:38:47.266552 | orchestrator | 2026-04-18 01:38:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:38:50.313319 | orchestrator | 2026-04-18 01:38:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:38:50.314993 | orchestrator | 2026-04-18 01:38:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:38:50.315079 | orchestrator | 2026-04-18 01:38:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:38:53.350650 | orchestrator | 2026-04-18 01:38:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:38:53.352509 | orchestrator | 2026-04-18 01:38:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:38:53.352575 | orchestrator | 2026-04-18 01:38:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:38:56.400886 | orchestrator | 2026-04-18 01:38:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:38:56.401854 | orchestrator | 2026-04-18 01:38:56 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:38:56.401885 | orchestrator | 2026-04-18 01:38:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:38:59.453467 | orchestrator | 2026-04-18 01:38:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:38:59.454306 | orchestrator | 2026-04-18 01:38:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:38:59.454372 | orchestrator | 2026-04-18 01:38:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:02.501543 | orchestrator | 2026-04-18 01:39:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:02.503236 | orchestrator | 2026-04-18 01:39:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:02.503338 | orchestrator | 2026-04-18 01:39:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:05.552069 | orchestrator | 2026-04-18 01:39:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:05.553769 | orchestrator | 2026-04-18 01:39:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:05.554226 | orchestrator | 2026-04-18 01:39:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:08.594193 | orchestrator | 2026-04-18 01:39:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:08.595315 | orchestrator | 2026-04-18 01:39:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:08.595435 | orchestrator | 2026-04-18 01:39:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:11.643611 | orchestrator | 2026-04-18 01:39:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:11.645691 | orchestrator | 2026-04-18 01:39:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:39:11.645802 | orchestrator | 2026-04-18 01:39:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:14.697534 | orchestrator | 2026-04-18 01:39:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:14.699560 | orchestrator | 2026-04-18 01:39:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:14.699626 | orchestrator | 2026-04-18 01:39:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:17.747581 | orchestrator | 2026-04-18 01:39:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:17.749259 | orchestrator | 2026-04-18 01:39:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:17.749313 | orchestrator | 2026-04-18 01:39:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:20.796504 | orchestrator | 2026-04-18 01:39:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:20.796725 | orchestrator | 2026-04-18 01:39:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:20.796744 | orchestrator | 2026-04-18 01:39:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:23.839269 | orchestrator | 2026-04-18 01:39:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:23.840315 | orchestrator | 2026-04-18 01:39:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:23.840774 | orchestrator | 2026-04-18 01:39:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:26.888498 | orchestrator | 2026-04-18 01:39:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:26.890346 | orchestrator | 2026-04-18 01:39:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:26.890407 | orchestrator | 2026-04-18 01:39:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:39:29.934930 | orchestrator | 2026-04-18 01:39:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:29.935669 | orchestrator | 2026-04-18 01:39:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:29.935766 | orchestrator | 2026-04-18 01:39:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:32.983683 | orchestrator | 2026-04-18 01:39:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:32.986146 | orchestrator | 2026-04-18 01:39:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:32.986215 | orchestrator | 2026-04-18 01:39:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:36.040915 | orchestrator | 2026-04-18 01:39:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:36.042853 | orchestrator | 2026-04-18 01:39:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:36.042905 | orchestrator | 2026-04-18 01:39:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:39.096199 | orchestrator | 2026-04-18 01:39:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:39.099745 | orchestrator | 2026-04-18 01:39:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:39.099791 | orchestrator | 2026-04-18 01:39:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:42.146292 | orchestrator | 2026-04-18 01:39:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:42.147671 | orchestrator | 2026-04-18 01:39:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:42.147793 | orchestrator | 2026-04-18 01:39:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:45.202620 | orchestrator | 2026-04-18 
01:39:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:45.205746 | orchestrator | 2026-04-18 01:39:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:45.205822 | orchestrator | 2026-04-18 01:39:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:48.252993 | orchestrator | 2026-04-18 01:39:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:48.256794 | orchestrator | 2026-04-18 01:39:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:48.256879 | orchestrator | 2026-04-18 01:39:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:51.315209 | orchestrator | 2026-04-18 01:39:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:51.319251 | orchestrator | 2026-04-18 01:39:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:51.319320 | orchestrator | 2026-04-18 01:39:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:54.375340 | orchestrator | 2026-04-18 01:39:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:54.376411 | orchestrator | 2026-04-18 01:39:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:54.376470 | orchestrator | 2026-04-18 01:39:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:39:57.422388 | orchestrator | 2026-04-18 01:39:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:39:57.423243 | orchestrator | 2026-04-18 01:39:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:39:57.423351 | orchestrator | 2026-04-18 01:39:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:00.470947 | orchestrator | 2026-04-18 01:40:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:40:00.472209 | orchestrator | 2026-04-18 01:40:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:00.472252 | orchestrator | 2026-04-18 01:40:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:03.515142 | orchestrator | 2026-04-18 01:40:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:03.516571 | orchestrator | 2026-04-18 01:40:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:03.516632 | orchestrator | 2026-04-18 01:40:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:06.560533 | orchestrator | 2026-04-18 01:40:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:06.561581 | orchestrator | 2026-04-18 01:40:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:06.561857 | orchestrator | 2026-04-18 01:40:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:09.592797 | orchestrator | 2026-04-18 01:40:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:09.592996 | orchestrator | 2026-04-18 01:40:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:09.593108 | orchestrator | 2026-04-18 01:40:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:12.643708 | orchestrator | 2026-04-18 01:40:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:12.644861 | orchestrator | 2026-04-18 01:40:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:12.644884 | orchestrator | 2026-04-18 01:40:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:15.697054 | orchestrator | 2026-04-18 01:40:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:15.697876 | orchestrator | 2026-04-18 01:40:15 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:15.697894 | orchestrator | 2026-04-18 01:40:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:18.739343 | orchestrator | 2026-04-18 01:40:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:18.742085 | orchestrator | 2026-04-18 01:40:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:18.742146 | orchestrator | 2026-04-18 01:40:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:21.783609 | orchestrator | 2026-04-18 01:40:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:21.786956 | orchestrator | 2026-04-18 01:40:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:21.787410 | orchestrator | 2026-04-18 01:40:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:24.833137 | orchestrator | 2026-04-18 01:40:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:24.836290 | orchestrator | 2026-04-18 01:40:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:24.836644 | orchestrator | 2026-04-18 01:40:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:27.878177 | orchestrator | 2026-04-18 01:40:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:27.879756 | orchestrator | 2026-04-18 01:40:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:27.879810 | orchestrator | 2026-04-18 01:40:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:30.922224 | orchestrator | 2026-04-18 01:40:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:30.923538 | orchestrator | 2026-04-18 01:40:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:40:30.923748 | orchestrator | 2026-04-18 01:40:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:33.967004 | orchestrator | 2026-04-18 01:40:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:33.969321 | orchestrator | 2026-04-18 01:40:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:33.969420 | orchestrator | 2026-04-18 01:40:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:37.021348 | orchestrator | 2026-04-18 01:40:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:37.024736 | orchestrator | 2026-04-18 01:40:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:37.024799 | orchestrator | 2026-04-18 01:40:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:40.068337 | orchestrator | 2026-04-18 01:40:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:40.069966 | orchestrator | 2026-04-18 01:40:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:40.070008 | orchestrator | 2026-04-18 01:40:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:43.119347 | orchestrator | 2026-04-18 01:40:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:43.121544 | orchestrator | 2026-04-18 01:40:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:43.121584 | orchestrator | 2026-04-18 01:40:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:46.166665 | orchestrator | 2026-04-18 01:40:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:46.167787 | orchestrator | 2026-04-18 01:40:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:46.167835 | orchestrator | 2026-04-18 01:40:46 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:40:49.214981 | orchestrator | 2026-04-18 01:40:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:49.216749 | orchestrator | 2026-04-18 01:40:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:49.216794 | orchestrator | 2026-04-18 01:40:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:52.265681 | orchestrator | 2026-04-18 01:40:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:52.268007 | orchestrator | 2026-04-18 01:40:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:52.268306 | orchestrator | 2026-04-18 01:40:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:55.321939 | orchestrator | 2026-04-18 01:40:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:55.322955 | orchestrator | 2026-04-18 01:40:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:55.323007 | orchestrator | 2026-04-18 01:40:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:40:58.371608 | orchestrator | 2026-04-18 01:40:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:40:58.373497 | orchestrator | 2026-04-18 01:40:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:40:58.373545 | orchestrator | 2026-04-18 01:40:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:01.428372 | orchestrator | 2026-04-18 01:41:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:01.429895 | orchestrator | 2026-04-18 01:41:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:01.429951 | orchestrator | 2026-04-18 01:41:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:04.479064 | orchestrator | 2026-04-18 
01:41:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:04.480485 | orchestrator | 2026-04-18 01:41:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:04.480562 | orchestrator | 2026-04-18 01:41:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:07.526220 | orchestrator | 2026-04-18 01:41:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:07.528296 | orchestrator | 2026-04-18 01:41:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:07.528362 | orchestrator | 2026-04-18 01:41:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:10.574640 | orchestrator | 2026-04-18 01:41:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:10.575685 | orchestrator | 2026-04-18 01:41:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:10.575760 | orchestrator | 2026-04-18 01:41:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:13.636111 | orchestrator | 2026-04-18 01:41:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:13.637611 | orchestrator | 2026-04-18 01:41:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:13.637671 | orchestrator | 2026-04-18 01:41:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:16.685433 | orchestrator | 2026-04-18 01:41:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:16.687384 | orchestrator | 2026-04-18 01:41:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:16.687464 | orchestrator | 2026-04-18 01:41:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:19.733874 | orchestrator | 2026-04-18 01:41:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:41:19.736147 | orchestrator | 2026-04-18 01:41:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:19.736202 | orchestrator | 2026-04-18 01:41:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:22.780738 | orchestrator | 2026-04-18 01:41:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:22.781826 | orchestrator | 2026-04-18 01:41:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:22.781892 | orchestrator | 2026-04-18 01:41:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:25.828248 | orchestrator | 2026-04-18 01:41:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:25.829941 | orchestrator | 2026-04-18 01:41:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:25.829986 | orchestrator | 2026-04-18 01:41:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:28.877249 | orchestrator | 2026-04-18 01:41:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:28.880462 | orchestrator | 2026-04-18 01:41:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:28.880541 | orchestrator | 2026-04-18 01:41:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:31.924011 | orchestrator | 2026-04-18 01:41:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:31.925614 | orchestrator | 2026-04-18 01:41:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:31.925666 | orchestrator | 2026-04-18 01:41:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:34.970647 | orchestrator | 2026-04-18 01:41:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:34.972251 | orchestrator | 2026-04-18 01:41:34 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:34.972307 | orchestrator | 2026-04-18 01:41:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:38.018827 | orchestrator | 2026-04-18 01:41:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:38.022309 | orchestrator | 2026-04-18 01:41:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:38.022358 | orchestrator | 2026-04-18 01:41:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:41.066877 | orchestrator | 2026-04-18 01:41:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:41.068863 | orchestrator | 2026-04-18 01:41:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:41.068921 | orchestrator | 2026-04-18 01:41:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:44.110432 | orchestrator | 2026-04-18 01:41:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:44.113835 | orchestrator | 2026-04-18 01:41:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:44.113909 | orchestrator | 2026-04-18 01:41:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:47.158477 | orchestrator | 2026-04-18 01:41:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:47.160126 | orchestrator | 2026-04-18 01:41:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:47.160240 | orchestrator | 2026-04-18 01:41:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:50.212537 | orchestrator | 2026-04-18 01:41:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:50.214916 | orchestrator | 2026-04-18 01:41:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:41:50.214984 | orchestrator | 2026-04-18 01:41:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:53.258174 | orchestrator | 2026-04-18 01:41:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:53.259519 | orchestrator | 2026-04-18 01:41:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:53.259620 | orchestrator | 2026-04-18 01:41:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:56.310390 | orchestrator | 2026-04-18 01:41:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:56.313292 | orchestrator | 2026-04-18 01:41:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:56.314881 | orchestrator | 2026-04-18 01:41:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:41:59.357410 | orchestrator | 2026-04-18 01:41:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:41:59.357994 | orchestrator | 2026-04-18 01:41:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:41:59.358090 | orchestrator | 2026-04-18 01:41:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:42:02.400326 | orchestrator | 2026-04-18 01:42:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:42:02.402648 | orchestrator | 2026-04-18 01:42:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:42:02.402706 | orchestrator | 2026-04-18 01:42:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:42:05.448377 | orchestrator | 2026-04-18 01:42:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:42:05.449689 | orchestrator | 2026-04-18 01:42:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:42:05.449768 | orchestrator | 2026-04-18 01:42:05 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:42:08.494270 | orchestrator | 2026-04-18 01:42:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:42:08.495943 | orchestrator | 2026-04-18 01:42:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:42:08.496070 | orchestrator | 2026-04-18 01:42:08 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeated every ~3 seconds from 01:42:11 through 01:47:22; both tasks remained in state STARTED throughout ...]
2026-04-18 01:47:22.443734 | orchestrator | 2026-04-18 01:47:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:22.448197 | orchestrator | 2026-04-18 01:47:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:22.448258 | orchestrator | 2026-04-18 01:47:22 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:47:25.487257 | orchestrator | 2026-04-18 01:47:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:25.488671 | orchestrator | 2026-04-18 01:47:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:25.488792 | orchestrator | 2026-04-18 01:47:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:28.531385 | orchestrator | 2026-04-18 01:47:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:28.532775 | orchestrator | 2026-04-18 01:47:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:28.532838 | orchestrator | 2026-04-18 01:47:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:31.577415 | orchestrator | 2026-04-18 01:47:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:31.580573 | orchestrator | 2026-04-18 01:47:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:31.580658 | orchestrator | 2026-04-18 01:47:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:34.617824 | orchestrator | 2026-04-18 01:47:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:34.619606 | orchestrator | 2026-04-18 01:47:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:34.619655 | orchestrator | 2026-04-18 01:47:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:37.663972 | orchestrator | 2026-04-18 01:47:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:37.666340 | orchestrator | 2026-04-18 01:47:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:37.666397 | orchestrator | 2026-04-18 01:47:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:40.709889 | orchestrator | 2026-04-18 
01:47:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:40.710424 | orchestrator | 2026-04-18 01:47:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:40.710589 | orchestrator | 2026-04-18 01:47:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:43.756435 | orchestrator | 2026-04-18 01:47:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:43.758398 | orchestrator | 2026-04-18 01:47:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:43.758483 | orchestrator | 2026-04-18 01:47:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:46.811578 | orchestrator | 2026-04-18 01:47:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:46.813223 | orchestrator | 2026-04-18 01:47:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:46.813280 | orchestrator | 2026-04-18 01:47:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:49.856457 | orchestrator | 2026-04-18 01:47:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:49.858475 | orchestrator | 2026-04-18 01:47:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:49.858533 | orchestrator | 2026-04-18 01:47:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:52.905235 | orchestrator | 2026-04-18 01:47:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:52.907208 | orchestrator | 2026-04-18 01:47:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:52.907249 | orchestrator | 2026-04-18 01:47:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:55.954445 | orchestrator | 2026-04-18 01:47:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:47:55.955978 | orchestrator | 2026-04-18 01:47:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:55.956053 | orchestrator | 2026-04-18 01:47:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:47:59.008239 | orchestrator | 2026-04-18 01:47:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:47:59.012608 | orchestrator | 2026-04-18 01:47:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:47:59.012710 | orchestrator | 2026-04-18 01:47:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:02.053532 | orchestrator | 2026-04-18 01:48:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:02.055678 | orchestrator | 2026-04-18 01:48:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:02.055723 | orchestrator | 2026-04-18 01:48:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:05.106400 | orchestrator | 2026-04-18 01:48:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:05.108192 | orchestrator | 2026-04-18 01:48:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:05.108243 | orchestrator | 2026-04-18 01:48:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:08.152404 | orchestrator | 2026-04-18 01:48:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:08.155830 | orchestrator | 2026-04-18 01:48:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:08.155933 | orchestrator | 2026-04-18 01:48:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:11.201298 | orchestrator | 2026-04-18 01:48:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:11.201877 | orchestrator | 2026-04-18 01:48:11 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:11.202120 | orchestrator | 2026-04-18 01:48:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:14.248261 | orchestrator | 2026-04-18 01:48:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:14.249915 | orchestrator | 2026-04-18 01:48:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:14.249979 | orchestrator | 2026-04-18 01:48:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:17.289534 | orchestrator | 2026-04-18 01:48:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:17.291436 | orchestrator | 2026-04-18 01:48:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:17.291489 | orchestrator | 2026-04-18 01:48:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:20.332415 | orchestrator | 2026-04-18 01:48:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:20.333214 | orchestrator | 2026-04-18 01:48:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:20.333293 | orchestrator | 2026-04-18 01:48:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:23.375095 | orchestrator | 2026-04-18 01:48:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:23.375826 | orchestrator | 2026-04-18 01:48:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:23.375991 | orchestrator | 2026-04-18 01:48:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:26.421117 | orchestrator | 2026-04-18 01:48:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:26.422871 | orchestrator | 2026-04-18 01:48:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:48:26.422974 | orchestrator | 2026-04-18 01:48:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:29.468598 | orchestrator | 2026-04-18 01:48:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:29.470202 | orchestrator | 2026-04-18 01:48:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:29.470246 | orchestrator | 2026-04-18 01:48:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:32.512510 | orchestrator | 2026-04-18 01:48:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:32.514720 | orchestrator | 2026-04-18 01:48:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:32.514784 | orchestrator | 2026-04-18 01:48:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:35.561847 | orchestrator | 2026-04-18 01:48:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:35.562967 | orchestrator | 2026-04-18 01:48:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:35.563001 | orchestrator | 2026-04-18 01:48:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:38.615985 | orchestrator | 2026-04-18 01:48:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:38.617150 | orchestrator | 2026-04-18 01:48:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:38.617202 | orchestrator | 2026-04-18 01:48:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:41.658286 | orchestrator | 2026-04-18 01:48:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:41.660164 | orchestrator | 2026-04-18 01:48:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:41.660215 | orchestrator | 2026-04-18 01:48:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:48:44.707462 | orchestrator | 2026-04-18 01:48:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:44.709594 | orchestrator | 2026-04-18 01:48:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:44.709670 | orchestrator | 2026-04-18 01:48:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:47.752953 | orchestrator | 2026-04-18 01:48:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:47.753098 | orchestrator | 2026-04-18 01:48:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:47.753116 | orchestrator | 2026-04-18 01:48:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:50.796502 | orchestrator | 2026-04-18 01:48:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:50.798709 | orchestrator | 2026-04-18 01:48:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:50.798754 | orchestrator | 2026-04-18 01:48:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:53.840699 | orchestrator | 2026-04-18 01:48:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:53.842807 | orchestrator | 2026-04-18 01:48:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:53.842899 | orchestrator | 2026-04-18 01:48:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:56.887876 | orchestrator | 2026-04-18 01:48:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:56.889559 | orchestrator | 2026-04-18 01:48:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:56.889637 | orchestrator | 2026-04-18 01:48:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:48:59.934004 | orchestrator | 2026-04-18 
01:48:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:48:59.935709 | orchestrator | 2026-04-18 01:48:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:48:59.935795 | orchestrator | 2026-04-18 01:48:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:02.977877 | orchestrator | 2026-04-18 01:49:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:02.979823 | orchestrator | 2026-04-18 01:49:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:02.979894 | orchestrator | 2026-04-18 01:49:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:06.026263 | orchestrator | 2026-04-18 01:49:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:06.028614 | orchestrator | 2026-04-18 01:49:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:06.028714 | orchestrator | 2026-04-18 01:49:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:09.078911 | orchestrator | 2026-04-18 01:49:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:09.080877 | orchestrator | 2026-04-18 01:49:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:09.080952 | orchestrator | 2026-04-18 01:49:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:12.129854 | orchestrator | 2026-04-18 01:49:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:12.131517 | orchestrator | 2026-04-18 01:49:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:12.131562 | orchestrator | 2026-04-18 01:49:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:15.181393 | orchestrator | 2026-04-18 01:49:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:49:15.182917 | orchestrator | 2026-04-18 01:49:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:15.182988 | orchestrator | 2026-04-18 01:49:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:18.226203 | orchestrator | 2026-04-18 01:49:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:18.226838 | orchestrator | 2026-04-18 01:49:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:18.226864 | orchestrator | 2026-04-18 01:49:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:21.273626 | orchestrator | 2026-04-18 01:49:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:21.275255 | orchestrator | 2026-04-18 01:49:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:21.275362 | orchestrator | 2026-04-18 01:49:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:24.316749 | orchestrator | 2026-04-18 01:49:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:24.316834 | orchestrator | 2026-04-18 01:49:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:24.316845 | orchestrator | 2026-04-18 01:49:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:27.355062 | orchestrator | 2026-04-18 01:49:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:27.356735 | orchestrator | 2026-04-18 01:49:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:27.356772 | orchestrator | 2026-04-18 01:49:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:30.396385 | orchestrator | 2026-04-18 01:49:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:30.397855 | orchestrator | 2026-04-18 01:49:30 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:30.397916 | orchestrator | 2026-04-18 01:49:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:33.442133 | orchestrator | 2026-04-18 01:49:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:33.443366 | orchestrator | 2026-04-18 01:49:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:33.443520 | orchestrator | 2026-04-18 01:49:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:36.489157 | orchestrator | 2026-04-18 01:49:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:36.490903 | orchestrator | 2026-04-18 01:49:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:36.491004 | orchestrator | 2026-04-18 01:49:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:39.536495 | orchestrator | 2026-04-18 01:49:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:39.537482 | orchestrator | 2026-04-18 01:49:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:39.537705 | orchestrator | 2026-04-18 01:49:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:42.585173 | orchestrator | 2026-04-18 01:49:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:42.586309 | orchestrator | 2026-04-18 01:49:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:42.586470 | orchestrator | 2026-04-18 01:49:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:45.629891 | orchestrator | 2026-04-18 01:49:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:45.631440 | orchestrator | 2026-04-18 01:49:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:49:45.631496 | orchestrator | 2026-04-18 01:49:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:48.672369 | orchestrator | 2026-04-18 01:49:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:48.673296 | orchestrator | 2026-04-18 01:49:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:48.673598 | orchestrator | 2026-04-18 01:49:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:51.722455 | orchestrator | 2026-04-18 01:49:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:51.723823 | orchestrator | 2026-04-18 01:49:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:51.723924 | orchestrator | 2026-04-18 01:49:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:54.765254 | orchestrator | 2026-04-18 01:49:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:54.766340 | orchestrator | 2026-04-18 01:49:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:54.766392 | orchestrator | 2026-04-18 01:49:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:49:57.808383 | orchestrator | 2026-04-18 01:49:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:49:57.809883 | orchestrator | 2026-04-18 01:49:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:49:57.809953 | orchestrator | 2026-04-18 01:49:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:00.849088 | orchestrator | 2026-04-18 01:50:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:00.850475 | orchestrator | 2026-04-18 01:50:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:00.850496 | orchestrator | 2026-04-18 01:50:00 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:50:03.897427 | orchestrator | 2026-04-18 01:50:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:03.898197 | orchestrator | 2026-04-18 01:50:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:03.898407 | orchestrator | 2026-04-18 01:50:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:06.943315 | orchestrator | 2026-04-18 01:50:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:06.944723 | orchestrator | 2026-04-18 01:50:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:06.944853 | orchestrator | 2026-04-18 01:50:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:09.992645 | orchestrator | 2026-04-18 01:50:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:09.994856 | orchestrator | 2026-04-18 01:50:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:09.994976 | orchestrator | 2026-04-18 01:50:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:13.045515 | orchestrator | 2026-04-18 01:50:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:13.047790 | orchestrator | 2026-04-18 01:50:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:13.047858 | orchestrator | 2026-04-18 01:50:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:16.090897 | orchestrator | 2026-04-18 01:50:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:16.093005 | orchestrator | 2026-04-18 01:50:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:16.093098 | orchestrator | 2026-04-18 01:50:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:19.137953 | orchestrator | 2026-04-18 
01:50:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:19.139275 | orchestrator | 2026-04-18 01:50:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:19.139394 | orchestrator | 2026-04-18 01:50:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:22.183578 | orchestrator | 2026-04-18 01:50:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:22.185322 | orchestrator | 2026-04-18 01:50:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:22.185409 | orchestrator | 2026-04-18 01:50:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:25.230769 | orchestrator | 2026-04-18 01:50:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:25.232814 | orchestrator | 2026-04-18 01:50:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:25.232869 | orchestrator | 2026-04-18 01:50:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:28.272879 | orchestrator | 2026-04-18 01:50:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:28.274419 | orchestrator | 2026-04-18 01:50:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:28.274508 | orchestrator | 2026-04-18 01:50:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:31.316005 | orchestrator | 2026-04-18 01:50:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:31.317500 | orchestrator | 2026-04-18 01:50:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:31.317624 | orchestrator | 2026-04-18 01:50:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:34.361851 | orchestrator | 2026-04-18 01:50:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:50:34.364091 | orchestrator | 2026-04-18 01:50:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:34.364159 | orchestrator | 2026-04-18 01:50:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:37.409372 | orchestrator | 2026-04-18 01:50:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:37.411554 | orchestrator | 2026-04-18 01:50:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:37.411618 | orchestrator | 2026-04-18 01:50:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:40.458432 | orchestrator | 2026-04-18 01:50:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:40.460872 | orchestrator | 2026-04-18 01:50:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:40.460935 | orchestrator | 2026-04-18 01:50:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:43.503554 | orchestrator | 2026-04-18 01:50:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:43.504897 | orchestrator | 2026-04-18 01:50:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:43.504962 | orchestrator | 2026-04-18 01:50:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:46.556056 | orchestrator | 2026-04-18 01:50:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:46.559903 | orchestrator | 2026-04-18 01:50:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:46.559962 | orchestrator | 2026-04-18 01:50:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:49.604283 | orchestrator | 2026-04-18 01:50:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:49.605485 | orchestrator | 2026-04-18 01:50:49 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:49.605613 | orchestrator | 2026-04-18 01:50:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:52.653899 | orchestrator | 2026-04-18 01:50:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:52.656635 | orchestrator | 2026-04-18 01:50:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:52.656760 | orchestrator | 2026-04-18 01:50:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:55.706159 | orchestrator | 2026-04-18 01:50:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:55.707304 | orchestrator | 2026-04-18 01:50:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:55.707346 | orchestrator | 2026-04-18 01:50:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:50:58.755742 | orchestrator | 2026-04-18 01:50:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:50:58.756993 | orchestrator | 2026-04-18 01:50:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:50:58.757089 | orchestrator | 2026-04-18 01:50:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:51:01.789421 | orchestrator | 2026-04-18 01:51:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:51:01.790833 | orchestrator | 2026-04-18 01:51:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:51:01.790989 | orchestrator | 2026-04-18 01:51:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:51:04.838819 | orchestrator | 2026-04-18 01:51:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:51:04.844570 | orchestrator | 2026-04-18 01:51:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:51:04.844654 | orchestrator | 2026-04-18 01:51:04 | INFO  | Wait 1 second(s) until the next check
2026-04-18 01:51:07.887383 | orchestrator | 2026-04-18 01:51:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:53:07.988128 | orchestrator | 2026-04-18 01:53:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 01:53:07.988240 | orchestrator | 2026-04-18 01:53:07 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds from 01:53:11 to 01:58:34; both tasks remain in state STARTED throughout ...]
2026-04-18 01:58:37.199694 | orchestrator | 2026-04-18 01:58:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 01:58:37.202256 | orchestrator | 2026-04-18 01:58:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 01:58:37.202315 | orchestrator | 2026-04-18 01:58:37 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:58:40.243846 | orchestrator | 2026-04-18 01:58:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:58:40.245551 | orchestrator | 2026-04-18 01:58:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:58:40.245753 | orchestrator | 2026-04-18 01:58:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:58:43.291545 | orchestrator | 2026-04-18 01:58:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:58:43.293150 | orchestrator | 2026-04-18 01:58:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:58:43.293191 | orchestrator | 2026-04-18 01:58:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:58:46.340381 | orchestrator | 2026-04-18 01:58:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:58:46.341270 | orchestrator | 2026-04-18 01:58:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:58:46.341516 | orchestrator | 2026-04-18 01:58:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:58:49.395830 | orchestrator | 2026-04-18 01:58:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:58:49.395961 | orchestrator | 2026-04-18 01:58:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:58:49.395972 | orchestrator | 2026-04-18 01:58:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:58:52.444492 | orchestrator | 2026-04-18 01:58:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:58:52.445954 | orchestrator | 2026-04-18 01:58:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:58:52.446109 | orchestrator | 2026-04-18 01:58:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:58:55.494793 | orchestrator | 2026-04-18 
01:58:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:58:55.496314 | orchestrator | 2026-04-18 01:58:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:58:55.496379 | orchestrator | 2026-04-18 01:58:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:58:58.545173 | orchestrator | 2026-04-18 01:58:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:58:58.546433 | orchestrator | 2026-04-18 01:58:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:58:58.546607 | orchestrator | 2026-04-18 01:58:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:01.601555 | orchestrator | 2026-04-18 01:59:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:01.602739 | orchestrator | 2026-04-18 01:59:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:01.602785 | orchestrator | 2026-04-18 01:59:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:04.643507 | orchestrator | 2026-04-18 01:59:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:04.644873 | orchestrator | 2026-04-18 01:59:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:04.644931 | orchestrator | 2026-04-18 01:59:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:07.691615 | orchestrator | 2026-04-18 01:59:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:07.693182 | orchestrator | 2026-04-18 01:59:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:07.693226 | orchestrator | 2026-04-18 01:59:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:10.739826 | orchestrator | 2026-04-18 01:59:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 01:59:10.741184 | orchestrator | 2026-04-18 01:59:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:10.741220 | orchestrator | 2026-04-18 01:59:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:13.791446 | orchestrator | 2026-04-18 01:59:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:13.793394 | orchestrator | 2026-04-18 01:59:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:13.793505 | orchestrator | 2026-04-18 01:59:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:16.838863 | orchestrator | 2026-04-18 01:59:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:16.841240 | orchestrator | 2026-04-18 01:59:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:16.841299 | orchestrator | 2026-04-18 01:59:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:19.882937 | orchestrator | 2026-04-18 01:59:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:19.884248 | orchestrator | 2026-04-18 01:59:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:19.884448 | orchestrator | 2026-04-18 01:59:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:22.928406 | orchestrator | 2026-04-18 01:59:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:22.929766 | orchestrator | 2026-04-18 01:59:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:22.929831 | orchestrator | 2026-04-18 01:59:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:25.976649 | orchestrator | 2026-04-18 01:59:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:25.978192 | orchestrator | 2026-04-18 01:59:25 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:25.978364 | orchestrator | 2026-04-18 01:59:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:29.022397 | orchestrator | 2026-04-18 01:59:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:29.023245 | orchestrator | 2026-04-18 01:59:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:29.023292 | orchestrator | 2026-04-18 01:59:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:32.074513 | orchestrator | 2026-04-18 01:59:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:32.075843 | orchestrator | 2026-04-18 01:59:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:32.076224 | orchestrator | 2026-04-18 01:59:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:35.120438 | orchestrator | 2026-04-18 01:59:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:35.121877 | orchestrator | 2026-04-18 01:59:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:35.122091 | orchestrator | 2026-04-18 01:59:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:38.167714 | orchestrator | 2026-04-18 01:59:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:38.169233 | orchestrator | 2026-04-18 01:59:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:38.169254 | orchestrator | 2026-04-18 01:59:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:41.217534 | orchestrator | 2026-04-18 01:59:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:41.219093 | orchestrator | 2026-04-18 01:59:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
01:59:41.219148 | orchestrator | 2026-04-18 01:59:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:44.265391 | orchestrator | 2026-04-18 01:59:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:44.266340 | orchestrator | 2026-04-18 01:59:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:44.266558 | orchestrator | 2026-04-18 01:59:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:47.313318 | orchestrator | 2026-04-18 01:59:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:47.314211 | orchestrator | 2026-04-18 01:59:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:47.314682 | orchestrator | 2026-04-18 01:59:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:50.355169 | orchestrator | 2026-04-18 01:59:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:50.355686 | orchestrator | 2026-04-18 01:59:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:50.355844 | orchestrator | 2026-04-18 01:59:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:53.399541 | orchestrator | 2026-04-18 01:59:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:53.400971 | orchestrator | 2026-04-18 01:59:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:53.401010 | orchestrator | 2026-04-18 01:59:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 01:59:56.446481 | orchestrator | 2026-04-18 01:59:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:56.447698 | orchestrator | 2026-04-18 01:59:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:56.447784 | orchestrator | 2026-04-18 01:59:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 01:59:59.491948 | orchestrator | 2026-04-18 01:59:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 01:59:59.492329 | orchestrator | 2026-04-18 01:59:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 01:59:59.492403 | orchestrator | 2026-04-18 01:59:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:02.534845 | orchestrator | 2026-04-18 02:00:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:02.537790 | orchestrator | 2026-04-18 02:00:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:02.537928 | orchestrator | 2026-04-18 02:00:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:05.581187 | orchestrator | 2026-04-18 02:00:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:05.582691 | orchestrator | 2026-04-18 02:00:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:05.582755 | orchestrator | 2026-04-18 02:00:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:08.619951 | orchestrator | 2026-04-18 02:00:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:08.621013 | orchestrator | 2026-04-18 02:00:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:08.621039 | orchestrator | 2026-04-18 02:00:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:11.671842 | orchestrator | 2026-04-18 02:00:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:11.673626 | orchestrator | 2026-04-18 02:00:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:11.673690 | orchestrator | 2026-04-18 02:00:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:14.718224 | orchestrator | 2026-04-18 
02:00:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:14.721260 | orchestrator | 2026-04-18 02:00:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:14.721327 | orchestrator | 2026-04-18 02:00:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:17.772788 | orchestrator | 2026-04-18 02:00:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:17.774538 | orchestrator | 2026-04-18 02:00:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:17.774616 | orchestrator | 2026-04-18 02:00:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:20.826318 | orchestrator | 2026-04-18 02:00:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:20.828165 | orchestrator | 2026-04-18 02:00:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:20.828208 | orchestrator | 2026-04-18 02:00:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:23.868289 | orchestrator | 2026-04-18 02:00:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:23.869819 | orchestrator | 2026-04-18 02:00:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:23.869897 | orchestrator | 2026-04-18 02:00:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:26.913263 | orchestrator | 2026-04-18 02:00:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:26.915669 | orchestrator | 2026-04-18 02:00:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:26.915731 | orchestrator | 2026-04-18 02:00:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:29.962149 | orchestrator | 2026-04-18 02:00:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:00:29.964111 | orchestrator | 2026-04-18 02:00:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:29.964203 | orchestrator | 2026-04-18 02:00:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:33.016432 | orchestrator | 2026-04-18 02:00:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:33.018221 | orchestrator | 2026-04-18 02:00:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:33.018317 | orchestrator | 2026-04-18 02:00:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:36.061355 | orchestrator | 2026-04-18 02:00:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:36.063170 | orchestrator | 2026-04-18 02:00:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:36.063216 | orchestrator | 2026-04-18 02:00:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:39.115513 | orchestrator | 2026-04-18 02:00:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:39.117336 | orchestrator | 2026-04-18 02:00:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:39.117435 | orchestrator | 2026-04-18 02:00:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:42.170229 | orchestrator | 2026-04-18 02:00:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:42.172716 | orchestrator | 2026-04-18 02:00:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:42.172768 | orchestrator | 2026-04-18 02:00:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:45.212979 | orchestrator | 2026-04-18 02:00:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:45.214240 | orchestrator | 2026-04-18 02:00:45 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:45.214320 | orchestrator | 2026-04-18 02:00:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:48.260944 | orchestrator | 2026-04-18 02:00:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:48.261836 | orchestrator | 2026-04-18 02:00:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:48.261966 | orchestrator | 2026-04-18 02:00:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:51.305600 | orchestrator | 2026-04-18 02:00:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:51.307370 | orchestrator | 2026-04-18 02:00:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:51.307485 | orchestrator | 2026-04-18 02:00:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:54.353571 | orchestrator | 2026-04-18 02:00:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:54.355522 | orchestrator | 2026-04-18 02:00:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:54.355647 | orchestrator | 2026-04-18 02:00:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:00:57.400525 | orchestrator | 2026-04-18 02:00:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:00:57.402760 | orchestrator | 2026-04-18 02:00:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:00:57.402837 | orchestrator | 2026-04-18 02:00:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:00.443799 | orchestrator | 2026-04-18 02:01:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:00.445029 | orchestrator | 2026-04-18 02:01:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:01:00.445527 | orchestrator | 2026-04-18 02:01:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:03.485579 | orchestrator | 2026-04-18 02:01:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:03.487222 | orchestrator | 2026-04-18 02:01:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:03.487279 | orchestrator | 2026-04-18 02:01:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:06.531479 | orchestrator | 2026-04-18 02:01:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:06.532658 | orchestrator | 2026-04-18 02:01:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:06.532703 | orchestrator | 2026-04-18 02:01:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:09.578628 | orchestrator | 2026-04-18 02:01:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:09.579846 | orchestrator | 2026-04-18 02:01:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:09.579936 | orchestrator | 2026-04-18 02:01:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:12.629635 | orchestrator | 2026-04-18 02:01:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:12.631374 | orchestrator | 2026-04-18 02:01:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:12.631438 | orchestrator | 2026-04-18 02:01:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:15.671915 | orchestrator | 2026-04-18 02:01:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:15.673728 | orchestrator | 2026-04-18 02:01:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:15.673848 | orchestrator | 2026-04-18 02:01:15 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:01:18.723882 | orchestrator | 2026-04-18 02:01:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:18.725229 | orchestrator | 2026-04-18 02:01:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:18.725264 | orchestrator | 2026-04-18 02:01:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:21.771061 | orchestrator | 2026-04-18 02:01:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:21.772872 | orchestrator | 2026-04-18 02:01:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:21.772926 | orchestrator | 2026-04-18 02:01:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:24.820194 | orchestrator | 2026-04-18 02:01:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:24.822238 | orchestrator | 2026-04-18 02:01:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:24.822287 | orchestrator | 2026-04-18 02:01:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:27.868064 | orchestrator | 2026-04-18 02:01:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:27.870327 | orchestrator | 2026-04-18 02:01:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:27.870424 | orchestrator | 2026-04-18 02:01:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:30.915141 | orchestrator | 2026-04-18 02:01:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:30.915921 | orchestrator | 2026-04-18 02:01:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:30.916189 | orchestrator | 2026-04-18 02:01:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:33.970178 | orchestrator | 2026-04-18 
02:01:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:33.971479 | orchestrator | 2026-04-18 02:01:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:33.972053 | orchestrator | 2026-04-18 02:01:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:37.016566 | orchestrator | 2026-04-18 02:01:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:37.018081 | orchestrator | 2026-04-18 02:01:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:37.018132 | orchestrator | 2026-04-18 02:01:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:40.065098 | orchestrator | 2026-04-18 02:01:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:40.066326 | orchestrator | 2026-04-18 02:01:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:40.066488 | orchestrator | 2026-04-18 02:01:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:43.106684 | orchestrator | 2026-04-18 02:01:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:43.109070 | orchestrator | 2026-04-18 02:01:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:43.109133 | orchestrator | 2026-04-18 02:01:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:46.148044 | orchestrator | 2026-04-18 02:01:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:46.149498 | orchestrator | 2026-04-18 02:01:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:46.149548 | orchestrator | 2026-04-18 02:01:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:49.193282 | orchestrator | 2026-04-18 02:01:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:01:49.194091 | orchestrator | 2026-04-18 02:01:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:49.194124 | orchestrator | 2026-04-18 02:01:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:52.245017 | orchestrator | 2026-04-18 02:01:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:52.246896 | orchestrator | 2026-04-18 02:01:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:52.247038 | orchestrator | 2026-04-18 02:01:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:55.295785 | orchestrator | 2026-04-18 02:01:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:55.297316 | orchestrator | 2026-04-18 02:01:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:55.297420 | orchestrator | 2026-04-18 02:01:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:01:58.344527 | orchestrator | 2026-04-18 02:01:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:01:58.346718 | orchestrator | 2026-04-18 02:01:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:01:58.346857 | orchestrator | 2026-04-18 02:01:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:02:01.393003 | orchestrator | 2026-04-18 02:02:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:02:01.394321 | orchestrator | 2026-04-18 02:02:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:02:01.394392 | orchestrator | 2026-04-18 02:02:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:02:04.442358 | orchestrator | 2026-04-18 02:02:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:02:04.443945 | orchestrator | 2026-04-18 02:02:04 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:02:04.444121 | orchestrator | 2026-04-18 02:02:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:02:07.501259 | orchestrator | 2026-04-18 02:02:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:02:07.504811 | orchestrator | 2026-04-18 02:02:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:02:07.504882 | orchestrator | 2026-04-18 02:02:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:02:10.547431 | orchestrator | 2026-04-18 02:02:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:02:10.548071 | orchestrator | 2026-04-18 02:02:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:02:10.548256 | orchestrator | 2026-04-18 02:02:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:02:13.601485 | orchestrator | 2026-04-18 02:02:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:02:13.603792 | orchestrator | 2026-04-18 02:02:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:02:13.603858 | orchestrator | 2026-04-18 02:02:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:02:16.649704 | orchestrator | 2026-04-18 02:02:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:02:16.651441 | orchestrator | 2026-04-18 02:02:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:02:16.651495 | orchestrator | 2026-04-18 02:02:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:02:19.693312 | orchestrator | 2026-04-18 02:02:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:02:19.694604 | orchestrator | 2026-04-18 02:02:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:02:19.694662 | orchestrator | 2026-04-18 02:02:19 | INFO  | Wait 1 second(s) until the next check
2026-04-18 02:02:22.739065 | orchestrator | 2026-04-18 02:02:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 02:02:22.740450 | orchestrator | 2026-04-18 02:02:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 02:02:22.740505 | orchestrator | 2026-04-18 02:02:22 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated every ~3 seconds; both tasks remained in state STARTED through 02:07:21 ...]
2026-04-18 02:07:21.628561 | orchestrator | 2026-04-18 02:07:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:21.630072 | orchestrator | 2026-04-18 02:07:21 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:21.630117 | orchestrator | 2026-04-18 02:07:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:24.681089 | orchestrator | 2026-04-18 02:07:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:24.682997 | orchestrator | 2026-04-18 02:07:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:24.683060 | orchestrator | 2026-04-18 02:07:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:27.743366 | orchestrator | 2026-04-18 02:07:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:27.747032 | orchestrator | 2026-04-18 02:07:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:27.747091 | orchestrator | 2026-04-18 02:07:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:30.795561 | orchestrator | 2026-04-18 02:07:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:30.797711 | orchestrator | 2026-04-18 02:07:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:30.797767 | orchestrator | 2026-04-18 02:07:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:33.859393 | orchestrator | 2026-04-18 02:07:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:33.861863 | orchestrator | 2026-04-18 02:07:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:33.861914 | orchestrator | 2026-04-18 02:07:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:36.910778 | orchestrator | 2026-04-18 02:07:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:36.914480 | orchestrator | 2026-04-18 02:07:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:07:36.914542 | orchestrator | 2026-04-18 02:07:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:39.968600 | orchestrator | 2026-04-18 02:07:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:39.972720 | orchestrator | 2026-04-18 02:07:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:39.972843 | orchestrator | 2026-04-18 02:07:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:43.030639 | orchestrator | 2026-04-18 02:07:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:43.031878 | orchestrator | 2026-04-18 02:07:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:43.031959 | orchestrator | 2026-04-18 02:07:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:46.077698 | orchestrator | 2026-04-18 02:07:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:46.081258 | orchestrator | 2026-04-18 02:07:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:46.081334 | orchestrator | 2026-04-18 02:07:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:49.128546 | orchestrator | 2026-04-18 02:07:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:49.130146 | orchestrator | 2026-04-18 02:07:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:49.130206 | orchestrator | 2026-04-18 02:07:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:52.187177 | orchestrator | 2026-04-18 02:07:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:52.190078 | orchestrator | 2026-04-18 02:07:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:52.190151 | orchestrator | 2026-04-18 02:07:52 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:07:55.242076 | orchestrator | 2026-04-18 02:07:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:55.242156 | orchestrator | 2026-04-18 02:07:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:55.242203 | orchestrator | 2026-04-18 02:07:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:07:58.296695 | orchestrator | 2026-04-18 02:07:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:07:58.299318 | orchestrator | 2026-04-18 02:07:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:07:58.299367 | orchestrator | 2026-04-18 02:07:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:01.345593 | orchestrator | 2026-04-18 02:08:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:01.347443 | orchestrator | 2026-04-18 02:08:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:01.347505 | orchestrator | 2026-04-18 02:08:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:04.401523 | orchestrator | 2026-04-18 02:08:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:04.402817 | orchestrator | 2026-04-18 02:08:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:04.402866 | orchestrator | 2026-04-18 02:08:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:07.452405 | orchestrator | 2026-04-18 02:08:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:07.454338 | orchestrator | 2026-04-18 02:08:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:07.454382 | orchestrator | 2026-04-18 02:08:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:10.506717 | orchestrator | 2026-04-18 
02:08:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:10.506958 | orchestrator | 2026-04-18 02:08:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:10.507122 | orchestrator | 2026-04-18 02:08:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:13.557032 | orchestrator | 2026-04-18 02:08:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:13.558255 | orchestrator | 2026-04-18 02:08:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:13.558302 | orchestrator | 2026-04-18 02:08:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:16.606304 | orchestrator | 2026-04-18 02:08:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:16.607550 | orchestrator | 2026-04-18 02:08:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:16.607638 | orchestrator | 2026-04-18 02:08:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:19.653734 | orchestrator | 2026-04-18 02:08:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:19.654813 | orchestrator | 2026-04-18 02:08:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:19.654867 | orchestrator | 2026-04-18 02:08:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:22.701212 | orchestrator | 2026-04-18 02:08:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:22.702411 | orchestrator | 2026-04-18 02:08:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:22.702473 | orchestrator | 2026-04-18 02:08:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:25.751187 | orchestrator | 2026-04-18 02:08:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:08:25.752919 | orchestrator | 2026-04-18 02:08:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:25.752983 | orchestrator | 2026-04-18 02:08:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:28.807881 | orchestrator | 2026-04-18 02:08:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:28.810780 | orchestrator | 2026-04-18 02:08:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:28.810853 | orchestrator | 2026-04-18 02:08:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:31.861482 | orchestrator | 2026-04-18 02:08:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:31.863395 | orchestrator | 2026-04-18 02:08:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:31.863465 | orchestrator | 2026-04-18 02:08:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:34.912678 | orchestrator | 2026-04-18 02:08:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:34.914003 | orchestrator | 2026-04-18 02:08:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:34.914160 | orchestrator | 2026-04-18 02:08:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:37.962878 | orchestrator | 2026-04-18 02:08:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:37.964070 | orchestrator | 2026-04-18 02:08:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:37.964120 | orchestrator | 2026-04-18 02:08:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:41.018704 | orchestrator | 2026-04-18 02:08:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:41.021122 | orchestrator | 2026-04-18 02:08:41 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:41.021193 | orchestrator | 2026-04-18 02:08:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:44.087104 | orchestrator | 2026-04-18 02:08:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:44.088990 | orchestrator | 2026-04-18 02:08:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:44.089121 | orchestrator | 2026-04-18 02:08:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:47.133519 | orchestrator | 2026-04-18 02:08:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:47.135265 | orchestrator | 2026-04-18 02:08:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:47.135297 | orchestrator | 2026-04-18 02:08:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:50.179706 | orchestrator | 2026-04-18 02:08:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:50.181703 | orchestrator | 2026-04-18 02:08:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:50.181764 | orchestrator | 2026-04-18 02:08:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:53.227359 | orchestrator | 2026-04-18 02:08:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:53.229314 | orchestrator | 2026-04-18 02:08:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:53.229384 | orchestrator | 2026-04-18 02:08:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:56.271213 | orchestrator | 2026-04-18 02:08:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:56.272459 | orchestrator | 2026-04-18 02:08:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:08:56.273045 | orchestrator | 2026-04-18 02:08:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:08:59.322493 | orchestrator | 2026-04-18 02:08:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:08:59.324356 | orchestrator | 2026-04-18 02:08:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:08:59.324498 | orchestrator | 2026-04-18 02:08:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:02.376384 | orchestrator | 2026-04-18 02:09:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:02.378181 | orchestrator | 2026-04-18 02:09:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:02.378266 | orchestrator | 2026-04-18 02:09:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:05.426497 | orchestrator | 2026-04-18 02:09:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:05.428328 | orchestrator | 2026-04-18 02:09:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:05.428493 | orchestrator | 2026-04-18 02:09:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:08.487696 | orchestrator | 2026-04-18 02:09:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:08.488689 | orchestrator | 2026-04-18 02:09:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:08.488742 | orchestrator | 2026-04-18 02:09:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:11.540026 | orchestrator | 2026-04-18 02:09:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:11.541780 | orchestrator | 2026-04-18 02:09:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:11.541886 | orchestrator | 2026-04-18 02:09:11 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:09:14.590771 | orchestrator | 2026-04-18 02:09:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:14.591400 | orchestrator | 2026-04-18 02:09:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:14.591432 | orchestrator | 2026-04-18 02:09:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:17.644230 | orchestrator | 2026-04-18 02:09:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:17.646220 | orchestrator | 2026-04-18 02:09:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:17.646283 | orchestrator | 2026-04-18 02:09:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:20.696307 | orchestrator | 2026-04-18 02:09:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:20.697667 | orchestrator | 2026-04-18 02:09:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:20.697731 | orchestrator | 2026-04-18 02:09:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:23.744451 | orchestrator | 2026-04-18 02:09:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:23.746994 | orchestrator | 2026-04-18 02:09:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:23.747042 | orchestrator | 2026-04-18 02:09:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:26.795965 | orchestrator | 2026-04-18 02:09:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:26.799146 | orchestrator | 2026-04-18 02:09:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:26.799218 | orchestrator | 2026-04-18 02:09:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:29.846402 | orchestrator | 2026-04-18 
02:09:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:29.848342 | orchestrator | 2026-04-18 02:09:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:29.848395 | orchestrator | 2026-04-18 02:09:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:32.894224 | orchestrator | 2026-04-18 02:09:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:32.895392 | orchestrator | 2026-04-18 02:09:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:32.895433 | orchestrator | 2026-04-18 02:09:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:35.943937 | orchestrator | 2026-04-18 02:09:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:35.945546 | orchestrator | 2026-04-18 02:09:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:35.945594 | orchestrator | 2026-04-18 02:09:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:38.993349 | orchestrator | 2026-04-18 02:09:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:38.995010 | orchestrator | 2026-04-18 02:09:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:38.995059 | orchestrator | 2026-04-18 02:09:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:42.044778 | orchestrator | 2026-04-18 02:09:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:42.046541 | orchestrator | 2026-04-18 02:09:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:42.046671 | orchestrator | 2026-04-18 02:09:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:45.092882 | orchestrator | 2026-04-18 02:09:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:09:45.094135 | orchestrator | 2026-04-18 02:09:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:45.094237 | orchestrator | 2026-04-18 02:09:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:48.143483 | orchestrator | 2026-04-18 02:09:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:48.144626 | orchestrator | 2026-04-18 02:09:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:48.144702 | orchestrator | 2026-04-18 02:09:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:51.196213 | orchestrator | 2026-04-18 02:09:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:51.198317 | orchestrator | 2026-04-18 02:09:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:51.198370 | orchestrator | 2026-04-18 02:09:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:54.248934 | orchestrator | 2026-04-18 02:09:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:54.252715 | orchestrator | 2026-04-18 02:09:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:54.252811 | orchestrator | 2026-04-18 02:09:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:09:57.300055 | orchestrator | 2026-04-18 02:09:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:09:57.302066 | orchestrator | 2026-04-18 02:09:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:09:57.302135 | orchestrator | 2026-04-18 02:09:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:00.346247 | orchestrator | 2026-04-18 02:10:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:00.348030 | orchestrator | 2026-04-18 02:10:00 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:00.348079 | orchestrator | 2026-04-18 02:10:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:03.400451 | orchestrator | 2026-04-18 02:10:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:03.402940 | orchestrator | 2026-04-18 02:10:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:03.403029 | orchestrator | 2026-04-18 02:10:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:06.450112 | orchestrator | 2026-04-18 02:10:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:06.451887 | orchestrator | 2026-04-18 02:10:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:06.451945 | orchestrator | 2026-04-18 02:10:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:09.501393 | orchestrator | 2026-04-18 02:10:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:09.502654 | orchestrator | 2026-04-18 02:10:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:09.502796 | orchestrator | 2026-04-18 02:10:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:12.548524 | orchestrator | 2026-04-18 02:10:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:12.549889 | orchestrator | 2026-04-18 02:10:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:12.549953 | orchestrator | 2026-04-18 02:10:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:15.588836 | orchestrator | 2026-04-18 02:10:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:15.589132 | orchestrator | 2026-04-18 02:10:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:10:15.589194 | orchestrator | 2026-04-18 02:10:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:18.632449 | orchestrator | 2026-04-18 02:10:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:18.634956 | orchestrator | 2026-04-18 02:10:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:18.635144 | orchestrator | 2026-04-18 02:10:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:21.672516 | orchestrator | 2026-04-18 02:10:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:21.673498 | orchestrator | 2026-04-18 02:10:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:21.673584 | orchestrator | 2026-04-18 02:10:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:24.721599 | orchestrator | 2026-04-18 02:10:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:24.722942 | orchestrator | 2026-04-18 02:10:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:24.722999 | orchestrator | 2026-04-18 02:10:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:27.768911 | orchestrator | 2026-04-18 02:10:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:27.770374 | orchestrator | 2026-04-18 02:10:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:27.770508 | orchestrator | 2026-04-18 02:10:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:30.816693 | orchestrator | 2026-04-18 02:10:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:30.818715 | orchestrator | 2026-04-18 02:10:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:30.818813 | orchestrator | 2026-04-18 02:10:30 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:10:33.869511 | orchestrator | 2026-04-18 02:10:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:33.871966 | orchestrator | 2026-04-18 02:10:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:33.872037 | orchestrator | 2026-04-18 02:10:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:36.922652 | orchestrator | 2026-04-18 02:10:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:36.923787 | orchestrator | 2026-04-18 02:10:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:36.923828 | orchestrator | 2026-04-18 02:10:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:39.978303 | orchestrator | 2026-04-18 02:10:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:39.981153 | orchestrator | 2026-04-18 02:10:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:39.981232 | orchestrator | 2026-04-18 02:10:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:43.023508 | orchestrator | 2026-04-18 02:10:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:43.025155 | orchestrator | 2026-04-18 02:10:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:43.025424 | orchestrator | 2026-04-18 02:10:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:46.068865 | orchestrator | 2026-04-18 02:10:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:46.070898 | orchestrator | 2026-04-18 02:10:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:46.071045 | orchestrator | 2026-04-18 02:10:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:49.118103 | orchestrator | 2026-04-18 
02:10:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:49.119087 | orchestrator | 2026-04-18 02:10:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:49.119132 | orchestrator | 2026-04-18 02:10:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:52.165230 | orchestrator | 2026-04-18 02:10:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:52.166251 | orchestrator | 2026-04-18 02:10:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:52.166311 | orchestrator | 2026-04-18 02:10:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:55.219349 | orchestrator | 2026-04-18 02:10:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:55.221245 | orchestrator | 2026-04-18 02:10:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:55.221294 | orchestrator | 2026-04-18 02:10:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:10:58.266766 | orchestrator | 2026-04-18 02:10:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:10:58.268384 | orchestrator | 2026-04-18 02:10:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:10:58.268442 | orchestrator | 2026-04-18 02:10:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:11:01.313961 | orchestrator | 2026-04-18 02:11:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:11:01.315506 | orchestrator | 2026-04-18 02:11:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:11:01.315606 | orchestrator | 2026-04-18 02:11:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:11:04.364645 | orchestrator | 2026-04-18 02:11:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED
2026-04-18 02:11:04.365764 | orchestrator | 2026-04-18 02:11:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 02:11:04.365927 | orchestrator | 2026-04-18 02:11:04 | INFO  | Wait 1 second(s) until the next check
[... repeated polling output elided: tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 each reported "is in state STARTED" followed by "Wait 1 second(s) until the next check" on every check, roughly every 3 seconds, from 02:11:07 through 02:16:33 ...]
2026-04-18 02:16:36.826421 | orchestrator | 2026-04-18 02:16:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 02:16:36.828347 | orchestrator | 2026-04-18 02:16:36 | INFO
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:16:36.829047 | orchestrator | 2026-04-18 02:16:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:16:39.873570 | orchestrator | 2026-04-18 02:16:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:16:39.874574 | orchestrator | 2026-04-18 02:16:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:16:39.874621 | orchestrator | 2026-04-18 02:16:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:16:42.920018 | orchestrator | 2026-04-18 02:16:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:16:42.921808 | orchestrator | 2026-04-18 02:16:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:16:42.921860 | orchestrator | 2026-04-18 02:16:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:16:45.968182 | orchestrator | 2026-04-18 02:16:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:16:45.970203 | orchestrator | 2026-04-18 02:16:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:16:45.970386 | orchestrator | 2026-04-18 02:16:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:16:49.019942 | orchestrator | 2026-04-18 02:16:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:16:49.020948 | orchestrator | 2026-04-18 02:16:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:16:49.021001 | orchestrator | 2026-04-18 02:16:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:16:52.069966 | orchestrator | 2026-04-18 02:16:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:16:52.072575 | orchestrator | 2026-04-18 02:16:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:16:52.072827 | orchestrator | 2026-04-18 02:16:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:16:55.120816 | orchestrator | 2026-04-18 02:16:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:16:55.122095 | orchestrator | 2026-04-18 02:16:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:16:55.122175 | orchestrator | 2026-04-18 02:16:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:16:58.169862 | orchestrator | 2026-04-18 02:16:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:16:58.171483 | orchestrator | 2026-04-18 02:16:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:16:58.171616 | orchestrator | 2026-04-18 02:16:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:01.214469 | orchestrator | 2026-04-18 02:17:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:01.215824 | orchestrator | 2026-04-18 02:17:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:01.215895 | orchestrator | 2026-04-18 02:17:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:04.264563 | orchestrator | 2026-04-18 02:17:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:04.265957 | orchestrator | 2026-04-18 02:17:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:04.266011 | orchestrator | 2026-04-18 02:17:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:07.313619 | orchestrator | 2026-04-18 02:17:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:07.315205 | orchestrator | 2026-04-18 02:17:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:07.315508 | orchestrator | 2026-04-18 02:17:07 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:17:10.360558 | orchestrator | 2026-04-18 02:17:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:10.362348 | orchestrator | 2026-04-18 02:17:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:10.362768 | orchestrator | 2026-04-18 02:17:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:13.407341 | orchestrator | 2026-04-18 02:17:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:13.409428 | orchestrator | 2026-04-18 02:17:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:13.409502 | orchestrator | 2026-04-18 02:17:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:16.462662 | orchestrator | 2026-04-18 02:17:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:16.464566 | orchestrator | 2026-04-18 02:17:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:16.464616 | orchestrator | 2026-04-18 02:17:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:19.507886 | orchestrator | 2026-04-18 02:17:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:19.509165 | orchestrator | 2026-04-18 02:17:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:19.509230 | orchestrator | 2026-04-18 02:17:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:22.552374 | orchestrator | 2026-04-18 02:17:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:22.555163 | orchestrator | 2026-04-18 02:17:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:22.555407 | orchestrator | 2026-04-18 02:17:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:25.598509 | orchestrator | 2026-04-18 
02:17:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:25.599772 | orchestrator | 2026-04-18 02:17:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:25.599834 | orchestrator | 2026-04-18 02:17:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:28.647733 | orchestrator | 2026-04-18 02:17:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:28.649188 | orchestrator | 2026-04-18 02:17:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:28.649289 | orchestrator | 2026-04-18 02:17:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:31.691306 | orchestrator | 2026-04-18 02:17:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:31.694148 | orchestrator | 2026-04-18 02:17:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:31.694226 | orchestrator | 2026-04-18 02:17:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:34.736589 | orchestrator | 2026-04-18 02:17:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:34.738289 | orchestrator | 2026-04-18 02:17:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:34.738375 | orchestrator | 2026-04-18 02:17:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:37.787856 | orchestrator | 2026-04-18 02:17:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:37.789708 | orchestrator | 2026-04-18 02:17:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:37.789852 | orchestrator | 2026-04-18 02:17:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:40.836927 | orchestrator | 2026-04-18 02:17:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:17:40.838306 | orchestrator | 2026-04-18 02:17:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:40.838444 | orchestrator | 2026-04-18 02:17:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:43.879178 | orchestrator | 2026-04-18 02:17:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:43.880752 | orchestrator | 2026-04-18 02:17:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:43.880838 | orchestrator | 2026-04-18 02:17:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:46.924079 | orchestrator | 2026-04-18 02:17:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:46.924911 | orchestrator | 2026-04-18 02:17:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:46.924936 | orchestrator | 2026-04-18 02:17:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:49.970213 | orchestrator | 2026-04-18 02:17:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:49.974439 | orchestrator | 2026-04-18 02:17:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:49.974476 | orchestrator | 2026-04-18 02:17:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:53.015459 | orchestrator | 2026-04-18 02:17:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:53.015714 | orchestrator | 2026-04-18 02:17:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:53.015881 | orchestrator | 2026-04-18 02:17:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:56.062326 | orchestrator | 2026-04-18 02:17:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:56.063733 | orchestrator | 2026-04-18 02:17:56 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:56.063796 | orchestrator | 2026-04-18 02:17:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:17:59.111291 | orchestrator | 2026-04-18 02:17:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:17:59.112538 | orchestrator | 2026-04-18 02:17:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:17:59.112667 | orchestrator | 2026-04-18 02:17:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:02.157559 | orchestrator | 2026-04-18 02:18:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:02.157928 | orchestrator | 2026-04-18 02:18:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:02.157953 | orchestrator | 2026-04-18 02:18:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:05.202203 | orchestrator | 2026-04-18 02:18:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:05.203943 | orchestrator | 2026-04-18 02:18:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:05.204018 | orchestrator | 2026-04-18 02:18:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:08.249473 | orchestrator | 2026-04-18 02:18:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:08.251397 | orchestrator | 2026-04-18 02:18:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:08.253167 | orchestrator | 2026-04-18 02:18:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:11.294598 | orchestrator | 2026-04-18 02:18:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:11.296230 | orchestrator | 2026-04-18 02:18:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:18:11.296256 | orchestrator | 2026-04-18 02:18:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:14.339306 | orchestrator | 2026-04-18 02:18:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:14.340526 | orchestrator | 2026-04-18 02:18:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:14.340633 | orchestrator | 2026-04-18 02:18:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:17.388953 | orchestrator | 2026-04-18 02:18:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:17.390791 | orchestrator | 2026-04-18 02:18:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:17.390912 | orchestrator | 2026-04-18 02:18:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:20.432580 | orchestrator | 2026-04-18 02:18:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:20.434251 | orchestrator | 2026-04-18 02:18:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:20.434382 | orchestrator | 2026-04-18 02:18:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:23.483164 | orchestrator | 2026-04-18 02:18:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:23.484843 | orchestrator | 2026-04-18 02:18:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:23.484911 | orchestrator | 2026-04-18 02:18:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:26.532080 | orchestrator | 2026-04-18 02:18:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:26.534415 | orchestrator | 2026-04-18 02:18:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:26.534487 | orchestrator | 2026-04-18 02:18:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:18:29.577691 | orchestrator | 2026-04-18 02:18:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:29.579195 | orchestrator | 2026-04-18 02:18:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:29.579246 | orchestrator | 2026-04-18 02:18:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:32.622600 | orchestrator | 2026-04-18 02:18:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:32.624509 | orchestrator | 2026-04-18 02:18:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:32.624626 | orchestrator | 2026-04-18 02:18:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:35.671634 | orchestrator | 2026-04-18 02:18:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:35.672449 | orchestrator | 2026-04-18 02:18:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:35.672469 | orchestrator | 2026-04-18 02:18:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:38.725633 | orchestrator | 2026-04-18 02:18:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:38.728291 | orchestrator | 2026-04-18 02:18:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:38.728354 | orchestrator | 2026-04-18 02:18:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:41.778217 | orchestrator | 2026-04-18 02:18:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:41.779549 | orchestrator | 2026-04-18 02:18:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:41.779650 | orchestrator | 2026-04-18 02:18:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:44.826900 | orchestrator | 2026-04-18 
02:18:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:44.827098 | orchestrator | 2026-04-18 02:18:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:44.827118 | orchestrator | 2026-04-18 02:18:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:47.886066 | orchestrator | 2026-04-18 02:18:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:47.887605 | orchestrator | 2026-04-18 02:18:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:47.887747 | orchestrator | 2026-04-18 02:18:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:50.935778 | orchestrator | 2026-04-18 02:18:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:50.937150 | orchestrator | 2026-04-18 02:18:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:50.937947 | orchestrator | 2026-04-18 02:18:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:53.990235 | orchestrator | 2026-04-18 02:18:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:53.994448 | orchestrator | 2026-04-18 02:18:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:53.994546 | orchestrator | 2026-04-18 02:18:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:18:57.042700 | orchestrator | 2026-04-18 02:18:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:18:57.043836 | orchestrator | 2026-04-18 02:18:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:18:57.043919 | orchestrator | 2026-04-18 02:18:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:00.086687 | orchestrator | 2026-04-18 02:19:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:19:00.088486 | orchestrator | 2026-04-18 02:19:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:00.088523 | orchestrator | 2026-04-18 02:19:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:03.133482 | orchestrator | 2026-04-18 02:19:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:03.134333 | orchestrator | 2026-04-18 02:19:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:03.134372 | orchestrator | 2026-04-18 02:19:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:06.180258 | orchestrator | 2026-04-18 02:19:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:06.181629 | orchestrator | 2026-04-18 02:19:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:06.181712 | orchestrator | 2026-04-18 02:19:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:09.232849 | orchestrator | 2026-04-18 02:19:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:09.233797 | orchestrator | 2026-04-18 02:19:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:09.233909 | orchestrator | 2026-04-18 02:19:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:12.279736 | orchestrator | 2026-04-18 02:19:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:12.282569 | orchestrator | 2026-04-18 02:19:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:12.282641 | orchestrator | 2026-04-18 02:19:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:15.332066 | orchestrator | 2026-04-18 02:19:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:15.332725 | orchestrator | 2026-04-18 02:19:15 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:15.332743 | orchestrator | 2026-04-18 02:19:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:18.376193 | orchestrator | 2026-04-18 02:19:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:18.377334 | orchestrator | 2026-04-18 02:19:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:18.377496 | orchestrator | 2026-04-18 02:19:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:21.418635 | orchestrator | 2026-04-18 02:19:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:21.420482 | orchestrator | 2026-04-18 02:19:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:21.420547 | orchestrator | 2026-04-18 02:19:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:24.465520 | orchestrator | 2026-04-18 02:19:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:24.466718 | orchestrator | 2026-04-18 02:19:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:24.466775 | orchestrator | 2026-04-18 02:19:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:27.516918 | orchestrator | 2026-04-18 02:19:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:27.518730 | orchestrator | 2026-04-18 02:19:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:27.518791 | orchestrator | 2026-04-18 02:19:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:30.563968 | orchestrator | 2026-04-18 02:19:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:30.565966 | orchestrator | 2026-04-18 02:19:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:19:30.565994 | orchestrator | 2026-04-18 02:19:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:33.618303 | orchestrator | 2026-04-18 02:19:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:33.619701 | orchestrator | 2026-04-18 02:19:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:33.619759 | orchestrator | 2026-04-18 02:19:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:36.666620 | orchestrator | 2026-04-18 02:19:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:36.667166 | orchestrator | 2026-04-18 02:19:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:36.667232 | orchestrator | 2026-04-18 02:19:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:39.721578 | orchestrator | 2026-04-18 02:19:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:39.722916 | orchestrator | 2026-04-18 02:19:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:39.722966 | orchestrator | 2026-04-18 02:19:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:42.766927 | orchestrator | 2026-04-18 02:19:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:42.769179 | orchestrator | 2026-04-18 02:19:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:42.769254 | orchestrator | 2026-04-18 02:19:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:45.811314 | orchestrator | 2026-04-18 02:19:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:45.811387 | orchestrator | 2026-04-18 02:19:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:45.811901 | orchestrator | 2026-04-18 02:19:45 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:19:48.863581 | orchestrator | 2026-04-18 02:19:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:48.864272 | orchestrator | 2026-04-18 02:19:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:48.864647 | orchestrator | 2026-04-18 02:19:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:51.913104 | orchestrator | 2026-04-18 02:19:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:51.914463 | orchestrator | 2026-04-18 02:19:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:51.914487 | orchestrator | 2026-04-18 02:19:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:54.969901 | orchestrator | 2026-04-18 02:19:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:54.971916 | orchestrator | 2026-04-18 02:19:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:54.972031 | orchestrator | 2026-04-18 02:19:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:19:58.028353 | orchestrator | 2026-04-18 02:19:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:19:58.030160 | orchestrator | 2026-04-18 02:19:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:19:58.030268 | orchestrator | 2026-04-18 02:19:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:20:01.082157 | orchestrator | 2026-04-18 02:20:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:20:01.084506 | orchestrator | 2026-04-18 02:20:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:20:01.084587 | orchestrator | 2026-04-18 02:20:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:20:04.136248 | orchestrator | 2026-04-18 
02:20:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:20:04.137522 | orchestrator | 2026-04-18 02:20:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:20:04.138267 | orchestrator | 2026-04-18 02:20:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:20:07.185869 | orchestrator | 2026-04-18 02:20:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:20:07.187802 | orchestrator | 2026-04-18 02:20:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:20:07.187862 | orchestrator | 2026-04-18 02:20:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:20:10.238702 | orchestrator | 2026-04-18 02:20:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:20:10.241445 | orchestrator | 2026-04-18 02:20:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:20:10.241602 | orchestrator | 2026-04-18 02:20:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:20:13.281909 | orchestrator | 2026-04-18 02:20:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:20:13.282508 | orchestrator | 2026-04-18 02:20:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:20:13.282549 | orchestrator | 2026-04-18 02:20:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:20:16.337736 | orchestrator | 2026-04-18 02:20:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:20:16.338000 | orchestrator | 2026-04-18 02:20:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:20:16.338066 | orchestrator | 2026-04-18 02:20:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:20:19.384783 | orchestrator | 2026-04-18 02:20:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:20:19.386173 | orchestrator | 2026-04-18 02:20:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:20:19.386205 | orchestrator | 2026-04-18 02:20:19 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 were re-checked roughly every 3 seconds and remained in state STARTED from 02:20:22 through 02:27:36; no console output was emitted between 02:22:09 and 02:24:12 ...]
2026-04-18 02:27:36.502543 | orchestrator | 2026-04-18 02:27:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state
STARTED 2026-04-18 02:27:36.504655 | orchestrator | 2026-04-18 02:27:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:27:36.504703 | orchestrator | 2026-04-18 02:27:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:27:39.541517 | orchestrator | 2026-04-18 02:27:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:27:39.542962 | orchestrator | 2026-04-18 02:27:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:27:39.543004 | orchestrator | 2026-04-18 02:27:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:27:42.581615 | orchestrator | 2026-04-18 02:27:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:27:42.582915 | orchestrator | 2026-04-18 02:27:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:27:42.582968 | orchestrator | 2026-04-18 02:27:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:27:45.625009 | orchestrator | 2026-04-18 02:27:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:27:45.626083 | orchestrator | 2026-04-18 02:27:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:27:45.626107 | orchestrator | 2026-04-18 02:27:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:27:48.667081 | orchestrator | 2026-04-18 02:27:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:27:48.667909 | orchestrator | 2026-04-18 02:27:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:27:48.668053 | orchestrator | 2026-04-18 02:27:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:27:51.714407 | orchestrator | 2026-04-18 02:27:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:27:51.716901 | orchestrator | 2026-04-18 02:27:51 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:27:51.716995 | orchestrator | 2026-04-18 02:27:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:27:54.766691 | orchestrator | 2026-04-18 02:27:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:27:54.767511 | orchestrator | 2026-04-18 02:27:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:27:54.767989 | orchestrator | 2026-04-18 02:27:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:27:57.812483 | orchestrator | 2026-04-18 02:27:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:27:57.814481 | orchestrator | 2026-04-18 02:27:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:27:57.814578 | orchestrator | 2026-04-18 02:27:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:00.856751 | orchestrator | 2026-04-18 02:28:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:00.857803 | orchestrator | 2026-04-18 02:28:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:00.858003 | orchestrator | 2026-04-18 02:28:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:03.909355 | orchestrator | 2026-04-18 02:28:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:03.911579 | orchestrator | 2026-04-18 02:28:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:03.912255 | orchestrator | 2026-04-18 02:28:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:06.958231 | orchestrator | 2026-04-18 02:28:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:06.960173 | orchestrator | 2026-04-18 02:28:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:28:06.960229 | orchestrator | 2026-04-18 02:28:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:10.010256 | orchestrator | 2026-04-18 02:28:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:10.011450 | orchestrator | 2026-04-18 02:28:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:10.011568 | orchestrator | 2026-04-18 02:28:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:13.058395 | orchestrator | 2026-04-18 02:28:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:13.105402 | orchestrator | 2026-04-18 02:28:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:13.105476 | orchestrator | 2026-04-18 02:28:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:16.111706 | orchestrator | 2026-04-18 02:28:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:16.113419 | orchestrator | 2026-04-18 02:28:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:16.113468 | orchestrator | 2026-04-18 02:28:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:19.161294 | orchestrator | 2026-04-18 02:28:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:19.164256 | orchestrator | 2026-04-18 02:28:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:19.164327 | orchestrator | 2026-04-18 02:28:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:22.207792 | orchestrator | 2026-04-18 02:28:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:22.210511 | orchestrator | 2026-04-18 02:28:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:22.210616 | orchestrator | 2026-04-18 02:28:22 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:28:25.258313 | orchestrator | 2026-04-18 02:28:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:25.258415 | orchestrator | 2026-04-18 02:28:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:25.258423 | orchestrator | 2026-04-18 02:28:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:28.301712 | orchestrator | 2026-04-18 02:28:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:28.303997 | orchestrator | 2026-04-18 02:28:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:28.304080 | orchestrator | 2026-04-18 02:28:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:31.348744 | orchestrator | 2026-04-18 02:28:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:31.350050 | orchestrator | 2026-04-18 02:28:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:31.350085 | orchestrator | 2026-04-18 02:28:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:34.391405 | orchestrator | 2026-04-18 02:28:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:34.393169 | orchestrator | 2026-04-18 02:28:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:34.393220 | orchestrator | 2026-04-18 02:28:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:37.440217 | orchestrator | 2026-04-18 02:28:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:37.443140 | orchestrator | 2026-04-18 02:28:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:37.443208 | orchestrator | 2026-04-18 02:28:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:40.488447 | orchestrator | 2026-04-18 
02:28:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:40.490482 | orchestrator | 2026-04-18 02:28:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:40.490566 | orchestrator | 2026-04-18 02:28:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:43.530947 | orchestrator | 2026-04-18 02:28:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:43.533268 | orchestrator | 2026-04-18 02:28:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:43.533328 | orchestrator | 2026-04-18 02:28:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:46.584145 | orchestrator | 2026-04-18 02:28:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:46.585993 | orchestrator | 2026-04-18 02:28:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:46.586181 | orchestrator | 2026-04-18 02:28:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:49.641346 | orchestrator | 2026-04-18 02:28:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:49.643985 | orchestrator | 2026-04-18 02:28:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:49.644070 | orchestrator | 2026-04-18 02:28:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:52.689505 | orchestrator | 2026-04-18 02:28:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:52.691708 | orchestrator | 2026-04-18 02:28:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:52.691769 | orchestrator | 2026-04-18 02:28:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:55.738790 | orchestrator | 2026-04-18 02:28:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:28:55.740170 | orchestrator | 2026-04-18 02:28:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:55.740259 | orchestrator | 2026-04-18 02:28:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:28:58.789537 | orchestrator | 2026-04-18 02:28:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:28:58.790689 | orchestrator | 2026-04-18 02:28:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:28:58.790749 | orchestrator | 2026-04-18 02:28:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:01.836030 | orchestrator | 2026-04-18 02:29:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:01.837722 | orchestrator | 2026-04-18 02:29:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:01.837772 | orchestrator | 2026-04-18 02:29:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:04.889748 | orchestrator | 2026-04-18 02:29:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:04.891761 | orchestrator | 2026-04-18 02:29:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:04.891935 | orchestrator | 2026-04-18 02:29:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:07.933267 | orchestrator | 2026-04-18 02:29:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:07.935200 | orchestrator | 2026-04-18 02:29:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:07.935254 | orchestrator | 2026-04-18 02:29:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:10.980259 | orchestrator | 2026-04-18 02:29:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:10.981809 | orchestrator | 2026-04-18 02:29:10 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:10.981854 | orchestrator | 2026-04-18 02:29:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:14.028745 | orchestrator | 2026-04-18 02:29:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:14.030316 | orchestrator | 2026-04-18 02:29:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:14.030369 | orchestrator | 2026-04-18 02:29:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:17.081199 | orchestrator | 2026-04-18 02:29:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:17.082732 | orchestrator | 2026-04-18 02:29:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:17.082795 | orchestrator | 2026-04-18 02:29:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:20.140660 | orchestrator | 2026-04-18 02:29:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:20.141967 | orchestrator | 2026-04-18 02:29:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:20.142108 | orchestrator | 2026-04-18 02:29:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:23.186852 | orchestrator | 2026-04-18 02:29:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:23.188092 | orchestrator | 2026-04-18 02:29:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:23.188223 | orchestrator | 2026-04-18 02:29:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:26.236508 | orchestrator | 2026-04-18 02:29:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:26.237669 | orchestrator | 2026-04-18 02:29:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:29:26.237712 | orchestrator | 2026-04-18 02:29:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:29.285106 | orchestrator | 2026-04-18 02:29:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:29.287521 | orchestrator | 2026-04-18 02:29:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:29.287630 | orchestrator | 2026-04-18 02:29:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:32.340627 | orchestrator | 2026-04-18 02:29:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:32.343779 | orchestrator | 2026-04-18 02:29:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:32.343993 | orchestrator | 2026-04-18 02:29:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:35.393314 | orchestrator | 2026-04-18 02:29:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:35.395116 | orchestrator | 2026-04-18 02:29:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:35.395220 | orchestrator | 2026-04-18 02:29:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:38.437686 | orchestrator | 2026-04-18 02:29:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:38.438893 | orchestrator | 2026-04-18 02:29:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:38.439031 | orchestrator | 2026-04-18 02:29:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:41.482161 | orchestrator | 2026-04-18 02:29:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:41.484558 | orchestrator | 2026-04-18 02:29:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:41.484607 | orchestrator | 2026-04-18 02:29:41 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:29:44.534507 | orchestrator | 2026-04-18 02:29:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:44.536741 | orchestrator | 2026-04-18 02:29:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:44.536796 | orchestrator | 2026-04-18 02:29:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:47.591327 | orchestrator | 2026-04-18 02:29:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:47.593297 | orchestrator | 2026-04-18 02:29:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:47.593372 | orchestrator | 2026-04-18 02:29:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:50.651779 | orchestrator | 2026-04-18 02:29:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:50.652653 | orchestrator | 2026-04-18 02:29:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:50.652769 | orchestrator | 2026-04-18 02:29:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:53.706315 | orchestrator | 2026-04-18 02:29:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:53.708301 | orchestrator | 2026-04-18 02:29:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:53.708904 | orchestrator | 2026-04-18 02:29:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:56.766705 | orchestrator | 2026-04-18 02:29:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:56.768116 | orchestrator | 2026-04-18 02:29:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:56.768154 | orchestrator | 2026-04-18 02:29:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:29:59.819202 | orchestrator | 2026-04-18 
02:29:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:29:59.820206 | orchestrator | 2026-04-18 02:29:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:29:59.820261 | orchestrator | 2026-04-18 02:29:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:02.872106 | orchestrator | 2026-04-18 02:30:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:02.875806 | orchestrator | 2026-04-18 02:30:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:02.875863 | orchestrator | 2026-04-18 02:30:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:05.931106 | orchestrator | 2026-04-18 02:30:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:05.933045 | orchestrator | 2026-04-18 02:30:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:05.933132 | orchestrator | 2026-04-18 02:30:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:08.983661 | orchestrator | 2026-04-18 02:30:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:08.985486 | orchestrator | 2026-04-18 02:30:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:08.985573 | orchestrator | 2026-04-18 02:30:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:12.043605 | orchestrator | 2026-04-18 02:30:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:12.045549 | orchestrator | 2026-04-18 02:30:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:12.045618 | orchestrator | 2026-04-18 02:30:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:15.099208 | orchestrator | 2026-04-18 02:30:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:30:15.101427 | orchestrator | 2026-04-18 02:30:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:15.101484 | orchestrator | 2026-04-18 02:30:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:18.150774 | orchestrator | 2026-04-18 02:30:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:18.151747 | orchestrator | 2026-04-18 02:30:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:18.151782 | orchestrator | 2026-04-18 02:30:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:21.206212 | orchestrator | 2026-04-18 02:30:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:21.208561 | orchestrator | 2026-04-18 02:30:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:21.208612 | orchestrator | 2026-04-18 02:30:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:24.255431 | orchestrator | 2026-04-18 02:30:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:24.257878 | orchestrator | 2026-04-18 02:30:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:24.258093 | orchestrator | 2026-04-18 02:30:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:27.305667 | orchestrator | 2026-04-18 02:30:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:27.307429 | orchestrator | 2026-04-18 02:30:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:27.307484 | orchestrator | 2026-04-18 02:30:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:30.355219 | orchestrator | 2026-04-18 02:30:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:30.356506 | orchestrator | 2026-04-18 02:30:30 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:30.356578 | orchestrator | 2026-04-18 02:30:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:33.405297 | orchestrator | 2026-04-18 02:30:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:33.406404 | orchestrator | 2026-04-18 02:30:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:33.406478 | orchestrator | 2026-04-18 02:30:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:36.459312 | orchestrator | 2026-04-18 02:30:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:36.460139 | orchestrator | 2026-04-18 02:30:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:36.460202 | orchestrator | 2026-04-18 02:30:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:39.511697 | orchestrator | 2026-04-18 02:30:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:39.514770 | orchestrator | 2026-04-18 02:30:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:39.514886 | orchestrator | 2026-04-18 02:30:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:42.559539 | orchestrator | 2026-04-18 02:30:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:42.561434 | orchestrator | 2026-04-18 02:30:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:42.561467 | orchestrator | 2026-04-18 02:30:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:45.610325 | orchestrator | 2026-04-18 02:30:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:45.611521 | orchestrator | 2026-04-18 02:30:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:30:45.611595 | orchestrator | 2026-04-18 02:30:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:48.656148 | orchestrator | 2026-04-18 02:30:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:48.657050 | orchestrator | 2026-04-18 02:30:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:48.657077 | orchestrator | 2026-04-18 02:30:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:51.707922 | orchestrator | 2026-04-18 02:30:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:51.709308 | orchestrator | 2026-04-18 02:30:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:51.709520 | orchestrator | 2026-04-18 02:30:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:54.753769 | orchestrator | 2026-04-18 02:30:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:54.754894 | orchestrator | 2026-04-18 02:30:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:54.754938 | orchestrator | 2026-04-18 02:30:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:30:57.804759 | orchestrator | 2026-04-18 02:30:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:30:57.806898 | orchestrator | 2026-04-18 02:30:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:30:57.807239 | orchestrator | 2026-04-18 02:30:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:31:00.854489 | orchestrator | 2026-04-18 02:31:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:31:00.856060 | orchestrator | 2026-04-18 02:31:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:31:00.856201 | orchestrator | 2026-04-18 02:31:00 | INFO  | Wait 1 second(s) 
until the next check
2026-04-18 02:31:03.904683 | orchestrator | 2026-04-18 02:31:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 02:31:03.906320 | orchestrator | 2026-04-18 02:31:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 02:31:03.906381 | orchestrator | 2026-04-18 02:31:03 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 02:31:06 to 02:36:15; both tasks remain in state STARTED throughout ...]
2026-04-18 02:36:18.172309 | orchestrator | 2026-04-18 02:36:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 02:36:18.174145 | orchestrator | 2026-04-18 02:36:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 02:36:18.174271 | orchestrator | 2026-04-18 02:36:18 | INFO  | Wait 1 second(s)
until the next check 2026-04-18 02:36:21.231413 | orchestrator | 2026-04-18 02:36:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:21.231868 | orchestrator | 2026-04-18 02:36:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:21.231998 | orchestrator | 2026-04-18 02:36:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:24.280253 | orchestrator | 2026-04-18 02:36:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:24.282567 | orchestrator | 2026-04-18 02:36:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:24.282611 | orchestrator | 2026-04-18 02:36:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:27.331077 | orchestrator | 2026-04-18 02:36:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:27.332663 | orchestrator | 2026-04-18 02:36:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:27.332747 | orchestrator | 2026-04-18 02:36:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:30.376305 | orchestrator | 2026-04-18 02:36:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:30.377589 | orchestrator | 2026-04-18 02:36:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:30.377686 | orchestrator | 2026-04-18 02:36:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:33.425002 | orchestrator | 2026-04-18 02:36:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:33.426087 | orchestrator | 2026-04-18 02:36:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:33.426133 | orchestrator | 2026-04-18 02:36:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:36.468262 | orchestrator | 2026-04-18 
02:36:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:36.469809 | orchestrator | 2026-04-18 02:36:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:36.469861 | orchestrator | 2026-04-18 02:36:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:39.512825 | orchestrator | 2026-04-18 02:36:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:39.514438 | orchestrator | 2026-04-18 02:36:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:39.514522 | orchestrator | 2026-04-18 02:36:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:42.564347 | orchestrator | 2026-04-18 02:36:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:42.566836 | orchestrator | 2026-04-18 02:36:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:42.567169 | orchestrator | 2026-04-18 02:36:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:45.613356 | orchestrator | 2026-04-18 02:36:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:45.615293 | orchestrator | 2026-04-18 02:36:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:45.615411 | orchestrator | 2026-04-18 02:36:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:48.657196 | orchestrator | 2026-04-18 02:36:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:48.658820 | orchestrator | 2026-04-18 02:36:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:48.658874 | orchestrator | 2026-04-18 02:36:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:51.703941 | orchestrator | 2026-04-18 02:36:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:36:51.705097 | orchestrator | 2026-04-18 02:36:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:51.705278 | orchestrator | 2026-04-18 02:36:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:54.746170 | orchestrator | 2026-04-18 02:36:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:54.748537 | orchestrator | 2026-04-18 02:36:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:54.748645 | orchestrator | 2026-04-18 02:36:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:36:57.796841 | orchestrator | 2026-04-18 02:36:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:36:57.798991 | orchestrator | 2026-04-18 02:36:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:36:57.799061 | orchestrator | 2026-04-18 02:36:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:00.843669 | orchestrator | 2026-04-18 02:37:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:00.845492 | orchestrator | 2026-04-18 02:37:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:00.845541 | orchestrator | 2026-04-18 02:37:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:03.888212 | orchestrator | 2026-04-18 02:37:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:03.889574 | orchestrator | 2026-04-18 02:37:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:03.889857 | orchestrator | 2026-04-18 02:37:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:06.936710 | orchestrator | 2026-04-18 02:37:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:06.938908 | orchestrator | 2026-04-18 02:37:06 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:06.939005 | orchestrator | 2026-04-18 02:37:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:09.985230 | orchestrator | 2026-04-18 02:37:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:09.986180 | orchestrator | 2026-04-18 02:37:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:09.986465 | orchestrator | 2026-04-18 02:37:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:13.033225 | orchestrator | 2026-04-18 02:37:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:13.034895 | orchestrator | 2026-04-18 02:37:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:13.035158 | orchestrator | 2026-04-18 02:37:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:16.090674 | orchestrator | 2026-04-18 02:37:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:16.091604 | orchestrator | 2026-04-18 02:37:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:16.091804 | orchestrator | 2026-04-18 02:37:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:19.137786 | orchestrator | 2026-04-18 02:37:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:19.139004 | orchestrator | 2026-04-18 02:37:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:19.139043 | orchestrator | 2026-04-18 02:37:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:22.190097 | orchestrator | 2026-04-18 02:37:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:22.191937 | orchestrator | 2026-04-18 02:37:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:37:22.191990 | orchestrator | 2026-04-18 02:37:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:25.241140 | orchestrator | 2026-04-18 02:37:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:25.242699 | orchestrator | 2026-04-18 02:37:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:25.242767 | orchestrator | 2026-04-18 02:37:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:28.291836 | orchestrator | 2026-04-18 02:37:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:28.296383 | orchestrator | 2026-04-18 02:37:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:28.296483 | orchestrator | 2026-04-18 02:37:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:31.352725 | orchestrator | 2026-04-18 02:37:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:31.354165 | orchestrator | 2026-04-18 02:37:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:31.354244 | orchestrator | 2026-04-18 02:37:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:34.394535 | orchestrator | 2026-04-18 02:37:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:34.395656 | orchestrator | 2026-04-18 02:37:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:34.395747 | orchestrator | 2026-04-18 02:37:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:37.442516 | orchestrator | 2026-04-18 02:37:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:37.444604 | orchestrator | 2026-04-18 02:37:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:37.444694 | orchestrator | 2026-04-18 02:37:37 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:37:40.489760 | orchestrator | 2026-04-18 02:37:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:40.490877 | orchestrator | 2026-04-18 02:37:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:40.490927 | orchestrator | 2026-04-18 02:37:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:43.538551 | orchestrator | 2026-04-18 02:37:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:43.539434 | orchestrator | 2026-04-18 02:37:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:43.539493 | orchestrator | 2026-04-18 02:37:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:46.585590 | orchestrator | 2026-04-18 02:37:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:46.589448 | orchestrator | 2026-04-18 02:37:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:46.589511 | orchestrator | 2026-04-18 02:37:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:49.633458 | orchestrator | 2026-04-18 02:37:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:49.634811 | orchestrator | 2026-04-18 02:37:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:49.634854 | orchestrator | 2026-04-18 02:37:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:52.684766 | orchestrator | 2026-04-18 02:37:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:52.685214 | orchestrator | 2026-04-18 02:37:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:52.685243 | orchestrator | 2026-04-18 02:37:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:55.731980 | orchestrator | 2026-04-18 
02:37:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:55.733719 | orchestrator | 2026-04-18 02:37:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:55.733878 | orchestrator | 2026-04-18 02:37:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:37:58.782596 | orchestrator | 2026-04-18 02:37:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:37:58.784624 | orchestrator | 2026-04-18 02:37:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:37:58.784670 | orchestrator | 2026-04-18 02:37:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:01.839736 | orchestrator | 2026-04-18 02:38:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:01.841816 | orchestrator | 2026-04-18 02:38:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:01.841903 | orchestrator | 2026-04-18 02:38:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:04.889095 | orchestrator | 2026-04-18 02:38:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:04.890623 | orchestrator | 2026-04-18 02:38:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:04.890730 | orchestrator | 2026-04-18 02:38:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:07.941240 | orchestrator | 2026-04-18 02:38:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:07.943003 | orchestrator | 2026-04-18 02:38:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:07.943114 | orchestrator | 2026-04-18 02:38:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:10.989263 | orchestrator | 2026-04-18 02:38:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:38:10.991300 | orchestrator | 2026-04-18 02:38:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:10.991366 | orchestrator | 2026-04-18 02:38:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:14.038245 | orchestrator | 2026-04-18 02:38:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:14.040186 | orchestrator | 2026-04-18 02:38:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:14.040253 | orchestrator | 2026-04-18 02:38:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:17.102530 | orchestrator | 2026-04-18 02:38:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:17.104001 | orchestrator | 2026-04-18 02:38:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:17.104094 | orchestrator | 2026-04-18 02:38:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:20.154560 | orchestrator | 2026-04-18 02:38:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:20.155312 | orchestrator | 2026-04-18 02:38:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:20.155365 | orchestrator | 2026-04-18 02:38:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:23.205351 | orchestrator | 2026-04-18 02:38:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:23.207986 | orchestrator | 2026-04-18 02:38:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:23.208179 | orchestrator | 2026-04-18 02:38:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:26.253899 | orchestrator | 2026-04-18 02:38:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:26.255339 | orchestrator | 2026-04-18 02:38:26 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:26.255385 | orchestrator | 2026-04-18 02:38:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:29.304509 | orchestrator | 2026-04-18 02:38:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:29.305974 | orchestrator | 2026-04-18 02:38:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:29.306069 | orchestrator | 2026-04-18 02:38:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:32.360369 | orchestrator | 2026-04-18 02:38:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:32.361750 | orchestrator | 2026-04-18 02:38:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:32.361847 | orchestrator | 2026-04-18 02:38:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:35.414534 | orchestrator | 2026-04-18 02:38:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:35.416062 | orchestrator | 2026-04-18 02:38:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:35.416158 | orchestrator | 2026-04-18 02:38:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:38.471847 | orchestrator | 2026-04-18 02:38:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:38.475201 | orchestrator | 2026-04-18 02:38:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:38.475332 | orchestrator | 2026-04-18 02:38:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:41.523841 | orchestrator | 2026-04-18 02:38:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:41.526106 | orchestrator | 2026-04-18 02:38:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:38:41.526164 | orchestrator | 2026-04-18 02:38:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:44.571134 | orchestrator | 2026-04-18 02:38:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:44.572525 | orchestrator | 2026-04-18 02:38:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:44.572604 | orchestrator | 2026-04-18 02:38:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:47.617835 | orchestrator | 2026-04-18 02:38:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:47.619288 | orchestrator | 2026-04-18 02:38:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:47.619363 | orchestrator | 2026-04-18 02:38:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:50.665949 | orchestrator | 2026-04-18 02:38:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:50.668010 | orchestrator | 2026-04-18 02:38:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:50.668273 | orchestrator | 2026-04-18 02:38:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:53.707014 | orchestrator | 2026-04-18 02:38:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:53.708517 | orchestrator | 2026-04-18 02:38:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:53.708558 | orchestrator | 2026-04-18 02:38:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:38:56.751380 | orchestrator | 2026-04-18 02:38:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:56.753051 | orchestrator | 2026-04-18 02:38:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:56.753097 | orchestrator | 2026-04-18 02:38:56 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:38:59.798908 | orchestrator | 2026-04-18 02:38:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:38:59.800495 | orchestrator | 2026-04-18 02:38:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:38:59.800717 | orchestrator | 2026-04-18 02:38:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:02.848269 | orchestrator | 2026-04-18 02:39:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:02.849564 | orchestrator | 2026-04-18 02:39:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:02.849662 | orchestrator | 2026-04-18 02:39:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:05.893762 | orchestrator | 2026-04-18 02:39:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:05.895719 | orchestrator | 2026-04-18 02:39:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:05.895787 | orchestrator | 2026-04-18 02:39:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:08.945631 | orchestrator | 2026-04-18 02:39:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:08.946789 | orchestrator | 2026-04-18 02:39:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:08.946897 | orchestrator | 2026-04-18 02:39:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:12.014296 | orchestrator | 2026-04-18 02:39:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:12.017223 | orchestrator | 2026-04-18 02:39:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:12.017299 | orchestrator | 2026-04-18 02:39:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:15.072093 | orchestrator | 2026-04-18 
02:39:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:15.074723 | orchestrator | 2026-04-18 02:39:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:15.074800 | orchestrator | 2026-04-18 02:39:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:18.121411 | orchestrator | 2026-04-18 02:39:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:18.122799 | orchestrator | 2026-04-18 02:39:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:18.122869 | orchestrator | 2026-04-18 02:39:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:21.162408 | orchestrator | 2026-04-18 02:39:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:21.164530 | orchestrator | 2026-04-18 02:39:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:21.164590 | orchestrator | 2026-04-18 02:39:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:24.209410 | orchestrator | 2026-04-18 02:39:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:24.210498 | orchestrator | 2026-04-18 02:39:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:24.210706 | orchestrator | 2026-04-18 02:39:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:27.256061 | orchestrator | 2026-04-18 02:39:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:27.258661 | orchestrator | 2026-04-18 02:39:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:27.258726 | orchestrator | 2026-04-18 02:39:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:30.305392 | orchestrator | 2026-04-18 02:39:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:39:30.306122 | orchestrator | 2026-04-18 02:39:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:30.306158 | orchestrator | 2026-04-18 02:39:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:33.354282 | orchestrator | 2026-04-18 02:39:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:33.355698 | orchestrator | 2026-04-18 02:39:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:33.355728 | orchestrator | 2026-04-18 02:39:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:36.402250 | orchestrator | 2026-04-18 02:39:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:36.403973 | orchestrator | 2026-04-18 02:39:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:36.404064 | orchestrator | 2026-04-18 02:39:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:39.451328 | orchestrator | 2026-04-18 02:39:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:39.453436 | orchestrator | 2026-04-18 02:39:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:39.453616 | orchestrator | 2026-04-18 02:39:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:42.498089 | orchestrator | 2026-04-18 02:39:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:42.500116 | orchestrator | 2026-04-18 02:39:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:42.500186 | orchestrator | 2026-04-18 02:39:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:45.544974 | orchestrator | 2026-04-18 02:39:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:45.547031 | orchestrator | 2026-04-18 02:39:45 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:45.547215 | orchestrator | 2026-04-18 02:39:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:48.593990 | orchestrator | 2026-04-18 02:39:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:48.595275 | orchestrator | 2026-04-18 02:39:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:48.595341 | orchestrator | 2026-04-18 02:39:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:51.636874 | orchestrator | 2026-04-18 02:39:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:51.639268 | orchestrator | 2026-04-18 02:39:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:51.639414 | orchestrator | 2026-04-18 02:39:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:54.687658 | orchestrator | 2026-04-18 02:39:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:54.689425 | orchestrator | 2026-04-18 02:39:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:54.689515 | orchestrator | 2026-04-18 02:39:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:39:57.739165 | orchestrator | 2026-04-18 02:39:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:39:57.740266 | orchestrator | 2026-04-18 02:39:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:39:57.740428 | orchestrator | 2026-04-18 02:39:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:40:00.801402 | orchestrator | 2026-04-18 02:40:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:40:00.803444 | orchestrator | 2026-04-18 02:40:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:40:00.803529 | orchestrator | 2026-04-18 02:40:00 | INFO  | Wait 1 second(s) until the next check
2026-04-18 02:40:03.858171 | orchestrator | 2026-04-18 02:40:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 02:40:03.861316 | orchestrator | 2026-04-18 02:40:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 02:40:03.861370 | orchestrator | 2026-04-18 02:40:03 | INFO  | Wait 1 second(s) until the next check
2026-04-18 02:45:33.274957 | orchestrator | 2026-04-18 02:45:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 02:45:33.276417 | orchestrator | 2026-04-18 02:45:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 02:45:33.276476 | orchestrator | 2026-04-18 02:45:33 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:45:36.322957 | orchestrator | 2026-04-18 02:45:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:45:36.324726 | orchestrator | 2026-04-18 02:45:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:45:36.324788 | orchestrator | 2026-04-18 02:45:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:45:39.367940 | orchestrator | 2026-04-18 02:45:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:45:39.369985 | orchestrator | 2026-04-18 02:45:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:45:39.370063 | orchestrator | 2026-04-18 02:45:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:45:42.414755 | orchestrator | 2026-04-18 02:45:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:45:42.416945 | orchestrator | 2026-04-18 02:45:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:45:42.417044 | orchestrator | 2026-04-18 02:45:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:45:45.462955 | orchestrator | 2026-04-18 02:45:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:45:45.464664 | orchestrator | 2026-04-18 02:45:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:45:45.464745 | orchestrator | 2026-04-18 02:45:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:45:48.511780 | orchestrator | 2026-04-18 02:45:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:45:48.513304 | orchestrator | 2026-04-18 02:45:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:45:48.513368 | orchestrator | 2026-04-18 02:45:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:45:51.562331 | orchestrator | 2026-04-18 
02:45:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:45:51.565223 | orchestrator | 2026-04-18 02:45:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:45:51.565290 | orchestrator | 2026-04-18 02:45:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:45:54.610394 | orchestrator | 2026-04-18 02:45:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:45:54.611341 | orchestrator | 2026-04-18 02:45:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:45:54.611390 | orchestrator | 2026-04-18 02:45:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:45:57.658167 | orchestrator | 2026-04-18 02:45:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:45:57.659499 | orchestrator | 2026-04-18 02:45:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:45:57.659545 | orchestrator | 2026-04-18 02:45:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:00.699554 | orchestrator | 2026-04-18 02:46:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:00.701689 | orchestrator | 2026-04-18 02:46:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:00.701766 | orchestrator | 2026-04-18 02:46:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:03.748336 | orchestrator | 2026-04-18 02:46:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:03.749675 | orchestrator | 2026-04-18 02:46:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:03.749725 | orchestrator | 2026-04-18 02:46:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:06.798680 | orchestrator | 2026-04-18 02:46:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:46:06.799974 | orchestrator | 2026-04-18 02:46:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:06.800154 | orchestrator | 2026-04-18 02:46:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:09.851504 | orchestrator | 2026-04-18 02:46:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:09.853310 | orchestrator | 2026-04-18 02:46:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:09.853541 | orchestrator | 2026-04-18 02:46:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:12.904145 | orchestrator | 2026-04-18 02:46:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:12.905279 | orchestrator | 2026-04-18 02:46:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:12.905325 | orchestrator | 2026-04-18 02:46:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:15.952104 | orchestrator | 2026-04-18 02:46:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:15.954145 | orchestrator | 2026-04-18 02:46:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:15.954265 | orchestrator | 2026-04-18 02:46:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:18.997318 | orchestrator | 2026-04-18 02:46:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:19.000124 | orchestrator | 2026-04-18 02:46:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:19.000224 | orchestrator | 2026-04-18 02:46:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:22.046181 | orchestrator | 2026-04-18 02:46:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:22.048219 | orchestrator | 2026-04-18 02:46:22 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:22.048302 | orchestrator | 2026-04-18 02:46:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:25.095817 | orchestrator | 2026-04-18 02:46:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:25.097728 | orchestrator | 2026-04-18 02:46:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:25.097777 | orchestrator | 2026-04-18 02:46:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:28.152741 | orchestrator | 2026-04-18 02:46:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:28.153996 | orchestrator | 2026-04-18 02:46:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:28.154100 | orchestrator | 2026-04-18 02:46:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:31.199195 | orchestrator | 2026-04-18 02:46:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:31.199726 | orchestrator | 2026-04-18 02:46:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:31.199796 | orchestrator | 2026-04-18 02:46:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:34.249695 | orchestrator | 2026-04-18 02:46:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:34.251549 | orchestrator | 2026-04-18 02:46:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:34.251699 | orchestrator | 2026-04-18 02:46:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:37.303406 | orchestrator | 2026-04-18 02:46:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:37.305201 | orchestrator | 2026-04-18 02:46:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:46:37.305257 | orchestrator | 2026-04-18 02:46:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:40.354275 | orchestrator | 2026-04-18 02:46:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:40.355206 | orchestrator | 2026-04-18 02:46:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:40.355239 | orchestrator | 2026-04-18 02:46:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:43.405841 | orchestrator | 2026-04-18 02:46:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:43.406528 | orchestrator | 2026-04-18 02:46:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:43.406576 | orchestrator | 2026-04-18 02:46:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:46.453702 | orchestrator | 2026-04-18 02:46:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:46.455171 | orchestrator | 2026-04-18 02:46:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:46.455266 | orchestrator | 2026-04-18 02:46:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:49.504631 | orchestrator | 2026-04-18 02:46:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:49.506368 | orchestrator | 2026-04-18 02:46:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:49.506428 | orchestrator | 2026-04-18 02:46:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:52.550502 | orchestrator | 2026-04-18 02:46:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:52.551143 | orchestrator | 2026-04-18 02:46:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:52.551176 | orchestrator | 2026-04-18 02:46:52 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:46:55.598806 | orchestrator | 2026-04-18 02:46:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:55.600413 | orchestrator | 2026-04-18 02:46:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:55.600464 | orchestrator | 2026-04-18 02:46:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:46:58.651702 | orchestrator | 2026-04-18 02:46:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:46:58.652998 | orchestrator | 2026-04-18 02:46:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:46:58.653042 | orchestrator | 2026-04-18 02:46:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:01.705192 | orchestrator | 2026-04-18 02:47:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:01.706183 | orchestrator | 2026-04-18 02:47:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:01.706276 | orchestrator | 2026-04-18 02:47:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:04.755246 | orchestrator | 2026-04-18 02:47:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:04.756484 | orchestrator | 2026-04-18 02:47:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:04.756527 | orchestrator | 2026-04-18 02:47:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:07.797368 | orchestrator | 2026-04-18 02:47:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:07.799366 | orchestrator | 2026-04-18 02:47:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:07.800112 | orchestrator | 2026-04-18 02:47:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:10.848805 | orchestrator | 2026-04-18 
02:47:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:10.850465 | orchestrator | 2026-04-18 02:47:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:10.850520 | orchestrator | 2026-04-18 02:47:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:13.901466 | orchestrator | 2026-04-18 02:47:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:13.902722 | orchestrator | 2026-04-18 02:47:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:13.902749 | orchestrator | 2026-04-18 02:47:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:16.949573 | orchestrator | 2026-04-18 02:47:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:16.951387 | orchestrator | 2026-04-18 02:47:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:16.951461 | orchestrator | 2026-04-18 02:47:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:19.998702 | orchestrator | 2026-04-18 02:47:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:20.000048 | orchestrator | 2026-04-18 02:47:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:20.000121 | orchestrator | 2026-04-18 02:47:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:23.048721 | orchestrator | 2026-04-18 02:47:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:23.050693 | orchestrator | 2026-04-18 02:47:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:23.050865 | orchestrator | 2026-04-18 02:47:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:26.097114 | orchestrator | 2026-04-18 02:47:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:47:26.099000 | orchestrator | 2026-04-18 02:47:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:26.099107 | orchestrator | 2026-04-18 02:47:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:29.148426 | orchestrator | 2026-04-18 02:47:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:29.150684 | orchestrator | 2026-04-18 02:47:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:29.150724 | orchestrator | 2026-04-18 02:47:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:32.193681 | orchestrator | 2026-04-18 02:47:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:32.195140 | orchestrator | 2026-04-18 02:47:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:32.195185 | orchestrator | 2026-04-18 02:47:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:35.245826 | orchestrator | 2026-04-18 02:47:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:35.247642 | orchestrator | 2026-04-18 02:47:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:35.247700 | orchestrator | 2026-04-18 02:47:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:38.300938 | orchestrator | 2026-04-18 02:47:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:38.302880 | orchestrator | 2026-04-18 02:47:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:38.302942 | orchestrator | 2026-04-18 02:47:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:41.361100 | orchestrator | 2026-04-18 02:47:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:41.362911 | orchestrator | 2026-04-18 02:47:41 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:41.362966 | orchestrator | 2026-04-18 02:47:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:44.419427 | orchestrator | 2026-04-18 02:47:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:44.421552 | orchestrator | 2026-04-18 02:47:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:44.421634 | orchestrator | 2026-04-18 02:47:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:47.467453 | orchestrator | 2026-04-18 02:47:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:47.470134 | orchestrator | 2026-04-18 02:47:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:47.470212 | orchestrator | 2026-04-18 02:47:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:50.514876 | orchestrator | 2026-04-18 02:47:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:50.516619 | orchestrator | 2026-04-18 02:47:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:50.516753 | orchestrator | 2026-04-18 02:47:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:53.574312 | orchestrator | 2026-04-18 02:47:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:53.574547 | orchestrator | 2026-04-18 02:47:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:53.574576 | orchestrator | 2026-04-18 02:47:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:56.620334 | orchestrator | 2026-04-18 02:47:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:56.622277 | orchestrator | 2026-04-18 02:47:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:47:56.622551 | orchestrator | 2026-04-18 02:47:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:47:59.676725 | orchestrator | 2026-04-18 02:47:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:47:59.678785 | orchestrator | 2026-04-18 02:47:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:47:59.678893 | orchestrator | 2026-04-18 02:47:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:02.727186 | orchestrator | 2026-04-18 02:48:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:02.728354 | orchestrator | 2026-04-18 02:48:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:02.728402 | orchestrator | 2026-04-18 02:48:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:05.777827 | orchestrator | 2026-04-18 02:48:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:05.779345 | orchestrator | 2026-04-18 02:48:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:05.779486 | orchestrator | 2026-04-18 02:48:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:08.825863 | orchestrator | 2026-04-18 02:48:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:08.827249 | orchestrator | 2026-04-18 02:48:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:08.827387 | orchestrator | 2026-04-18 02:48:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:11.875858 | orchestrator | 2026-04-18 02:48:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:11.877508 | orchestrator | 2026-04-18 02:48:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:11.877670 | orchestrator | 2026-04-18 02:48:11 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:48:14.929674 | orchestrator | 2026-04-18 02:48:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:14.930513 | orchestrator | 2026-04-18 02:48:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:14.930582 | orchestrator | 2026-04-18 02:48:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:17.975566 | orchestrator | 2026-04-18 02:48:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:17.977285 | orchestrator | 2026-04-18 02:48:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:17.977334 | orchestrator | 2026-04-18 02:48:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:21.021339 | orchestrator | 2026-04-18 02:48:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:21.023057 | orchestrator | 2026-04-18 02:48:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:21.023122 | orchestrator | 2026-04-18 02:48:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:24.072774 | orchestrator | 2026-04-18 02:48:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:24.074320 | orchestrator | 2026-04-18 02:48:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:24.074386 | orchestrator | 2026-04-18 02:48:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:27.121157 | orchestrator | 2026-04-18 02:48:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:27.122458 | orchestrator | 2026-04-18 02:48:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:27.122750 | orchestrator | 2026-04-18 02:48:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:30.168193 | orchestrator | 2026-04-18 
02:48:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:30.169752 | orchestrator | 2026-04-18 02:48:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:30.169792 | orchestrator | 2026-04-18 02:48:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:33.210702 | orchestrator | 2026-04-18 02:48:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:33.212308 | orchestrator | 2026-04-18 02:48:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:33.212342 | orchestrator | 2026-04-18 02:48:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:36.255728 | orchestrator | 2026-04-18 02:48:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:36.257720 | orchestrator | 2026-04-18 02:48:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:36.258092 | orchestrator | 2026-04-18 02:48:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:39.302000 | orchestrator | 2026-04-18 02:48:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:39.304053 | orchestrator | 2026-04-18 02:48:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:39.304112 | orchestrator | 2026-04-18 02:48:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:42.356496 | orchestrator | 2026-04-18 02:48:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:42.358758 | orchestrator | 2026-04-18 02:48:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:42.358840 | orchestrator | 2026-04-18 02:48:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:45.415579 | orchestrator | 2026-04-18 02:48:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:48:45.418252 | orchestrator | 2026-04-18 02:48:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:45.418435 | orchestrator | 2026-04-18 02:48:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:48.472765 | orchestrator | 2026-04-18 02:48:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:48.474983 | orchestrator | 2026-04-18 02:48:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:48.475043 | orchestrator | 2026-04-18 02:48:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:51.525118 | orchestrator | 2026-04-18 02:48:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:51.529798 | orchestrator | 2026-04-18 02:48:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:51.529887 | orchestrator | 2026-04-18 02:48:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:54.566812 | orchestrator | 2026-04-18 02:48:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:54.567725 | orchestrator | 2026-04-18 02:48:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:54.567764 | orchestrator | 2026-04-18 02:48:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:48:57.615086 | orchestrator | 2026-04-18 02:48:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:48:57.615616 | orchestrator | 2026-04-18 02:48:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:48:57.615703 | orchestrator | 2026-04-18 02:48:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:49:00.657920 | orchestrator | 2026-04-18 02:49:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:49:00.659093 | orchestrator | 2026-04-18 02:49:00 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:49:00.659126 | orchestrator | 2026-04-18 02:49:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:49:03.706328 | orchestrator | 2026-04-18 02:49:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:49:03.708291 | orchestrator | 2026-04-18 02:49:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:49:03.708512 | orchestrator | 2026-04-18 02:49:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:49:06.756449 | orchestrator | 2026-04-18 02:49:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:49:06.758241 | orchestrator | 2026-04-18 02:49:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:49:06.758409 | orchestrator | 2026-04-18 02:49:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:49:09.802428 | orchestrator | 2026-04-18 02:49:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:49:09.803728 | orchestrator | 2026-04-18 02:49:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:49:09.803808 | orchestrator | 2026-04-18 02:49:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:49:12.846554 | orchestrator | 2026-04-18 02:49:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:49:12.847448 | orchestrator | 2026-04-18 02:49:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:49:12.847491 | orchestrator | 2026-04-18 02:49:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:49:15.896839 | orchestrator | 2026-04-18 02:49:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:49:15.898628 | orchestrator | 2026-04-18 02:49:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:49:15.898685 | orchestrator | 2026-04-18 02:49:15 | INFO  | Wait 1 second(s) until the next check
2026-04-18 02:49:18.940734 | orchestrator | 2026-04-18 02:49:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 02:49:18.943441 | orchestrator | 2026-04-18 02:49:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 02:49:18.943536 | orchestrator | 2026-04-18 02:49:18 | INFO  | Wait 1 second(s) until the next check
[... identical polling messages repeated every ~3 s from 02:49:21 to 02:56:15; both tasks remained in state STARTED throughout, with a gap in log output between 02:53:13 and 02:55:14 ...]
2026-04-18 02:56:18.049760 | orchestrator | 2026-04-18 02:56:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 02:56:18.060626 | orchestrator | 2026-04-18 02:56:18 | INFO 
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:18.060705 | orchestrator | 2026-04-18 02:56:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:21.100415 | orchestrator | 2026-04-18 02:56:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:21.101235 | orchestrator | 2026-04-18 02:56:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:21.101341 | orchestrator | 2026-04-18 02:56:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:24.144470 | orchestrator | 2026-04-18 02:56:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:24.144871 | orchestrator | 2026-04-18 02:56:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:24.145007 | orchestrator | 2026-04-18 02:56:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:27.193607 | orchestrator | 2026-04-18 02:56:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:27.194216 | orchestrator | 2026-04-18 02:56:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:27.194254 | orchestrator | 2026-04-18 02:56:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:30.248907 | orchestrator | 2026-04-18 02:56:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:30.250160 | orchestrator | 2026-04-18 02:56:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:30.250198 | orchestrator | 2026-04-18 02:56:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:33.296762 | orchestrator | 2026-04-18 02:56:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:33.298674 | orchestrator | 2026-04-18 02:56:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:56:33.298732 | orchestrator | 2026-04-18 02:56:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:36.346753 | orchestrator | 2026-04-18 02:56:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:36.348529 | orchestrator | 2026-04-18 02:56:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:36.348586 | orchestrator | 2026-04-18 02:56:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:39.397253 | orchestrator | 2026-04-18 02:56:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:39.397446 | orchestrator | 2026-04-18 02:56:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:39.397560 | orchestrator | 2026-04-18 02:56:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:42.448317 | orchestrator | 2026-04-18 02:56:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:42.448433 | orchestrator | 2026-04-18 02:56:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:42.448505 | orchestrator | 2026-04-18 02:56:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:45.499091 | orchestrator | 2026-04-18 02:56:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:45.500803 | orchestrator | 2026-04-18 02:56:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:45.500957 | orchestrator | 2026-04-18 02:56:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:48.546283 | orchestrator | 2026-04-18 02:56:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:48.548222 | orchestrator | 2026-04-18 02:56:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:48.548291 | orchestrator | 2026-04-18 02:56:48 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:56:51.592869 | orchestrator | 2026-04-18 02:56:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:51.593466 | orchestrator | 2026-04-18 02:56:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:51.593491 | orchestrator | 2026-04-18 02:56:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:54.640286 | orchestrator | 2026-04-18 02:56:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:54.642484 | orchestrator | 2026-04-18 02:56:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:54.642590 | orchestrator | 2026-04-18 02:56:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:56:57.684422 | orchestrator | 2026-04-18 02:56:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:56:57.685296 | orchestrator | 2026-04-18 02:56:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:56:57.685341 | orchestrator | 2026-04-18 02:56:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:00.730315 | orchestrator | 2026-04-18 02:57:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:00.732601 | orchestrator | 2026-04-18 02:57:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:00.733212 | orchestrator | 2026-04-18 02:57:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:03.778728 | orchestrator | 2026-04-18 02:57:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:03.781104 | orchestrator | 2026-04-18 02:57:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:03.781185 | orchestrator | 2026-04-18 02:57:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:06.825063 | orchestrator | 2026-04-18 
02:57:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:06.826586 | orchestrator | 2026-04-18 02:57:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:06.826876 | orchestrator | 2026-04-18 02:57:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:09.873073 | orchestrator | 2026-04-18 02:57:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:09.874417 | orchestrator | 2026-04-18 02:57:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:09.874588 | orchestrator | 2026-04-18 02:57:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:12.924850 | orchestrator | 2026-04-18 02:57:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:12.927131 | orchestrator | 2026-04-18 02:57:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:12.927211 | orchestrator | 2026-04-18 02:57:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:15.970671 | orchestrator | 2026-04-18 02:57:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:15.973112 | orchestrator | 2026-04-18 02:57:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:15.973186 | orchestrator | 2026-04-18 02:57:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:19.021077 | orchestrator | 2026-04-18 02:57:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:19.023169 | orchestrator | 2026-04-18 02:57:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:19.023224 | orchestrator | 2026-04-18 02:57:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:22.069715 | orchestrator | 2026-04-18 02:57:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:57:22.070500 | orchestrator | 2026-04-18 02:57:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:22.070732 | orchestrator | 2026-04-18 02:57:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:25.114337 | orchestrator | 2026-04-18 02:57:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:25.116471 | orchestrator | 2026-04-18 02:57:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:25.116589 | orchestrator | 2026-04-18 02:57:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:28.160056 | orchestrator | 2026-04-18 02:57:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:28.161645 | orchestrator | 2026-04-18 02:57:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:28.161687 | orchestrator | 2026-04-18 02:57:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:31.205318 | orchestrator | 2026-04-18 02:57:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:31.205798 | orchestrator | 2026-04-18 02:57:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:31.205935 | orchestrator | 2026-04-18 02:57:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:34.257198 | orchestrator | 2026-04-18 02:57:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:34.258309 | orchestrator | 2026-04-18 02:57:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:34.258462 | orchestrator | 2026-04-18 02:57:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:37.303384 | orchestrator | 2026-04-18 02:57:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:37.305108 | orchestrator | 2026-04-18 02:57:37 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:37.305189 | orchestrator | 2026-04-18 02:57:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:40.350282 | orchestrator | 2026-04-18 02:57:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:40.355111 | orchestrator | 2026-04-18 02:57:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:40.355484 | orchestrator | 2026-04-18 02:57:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:43.403836 | orchestrator | 2026-04-18 02:57:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:43.406399 | orchestrator | 2026-04-18 02:57:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:43.406605 | orchestrator | 2026-04-18 02:57:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:46.455010 | orchestrator | 2026-04-18 02:57:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:46.456516 | orchestrator | 2026-04-18 02:57:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:46.456634 | orchestrator | 2026-04-18 02:57:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:49.495253 | orchestrator | 2026-04-18 02:57:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:49.497084 | orchestrator | 2026-04-18 02:57:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:49.497143 | orchestrator | 2026-04-18 02:57:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:52.542193 | orchestrator | 2026-04-18 02:57:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:52.543541 | orchestrator | 2026-04-18 02:57:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:57:52.543631 | orchestrator | 2026-04-18 02:57:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:55.588929 | orchestrator | 2026-04-18 02:57:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:55.590649 | orchestrator | 2026-04-18 02:57:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:55.590699 | orchestrator | 2026-04-18 02:57:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:57:58.637763 | orchestrator | 2026-04-18 02:57:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:57:58.639370 | orchestrator | 2026-04-18 02:57:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:57:58.639431 | orchestrator | 2026-04-18 02:57:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:01.681207 | orchestrator | 2026-04-18 02:58:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:01.683844 | orchestrator | 2026-04-18 02:58:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:01.683970 | orchestrator | 2026-04-18 02:58:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:04.728229 | orchestrator | 2026-04-18 02:58:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:04.729468 | orchestrator | 2026-04-18 02:58:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:04.729730 | orchestrator | 2026-04-18 02:58:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:07.773728 | orchestrator | 2026-04-18 02:58:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:07.775010 | orchestrator | 2026-04-18 02:58:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:07.775082 | orchestrator | 2026-04-18 02:58:07 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:58:10.821028 | orchestrator | 2026-04-18 02:58:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:10.822719 | orchestrator | 2026-04-18 02:58:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:10.822785 | orchestrator | 2026-04-18 02:58:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:13.867047 | orchestrator | 2026-04-18 02:58:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:13.868644 | orchestrator | 2026-04-18 02:58:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:13.869337 | orchestrator | 2026-04-18 02:58:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:16.909780 | orchestrator | 2026-04-18 02:58:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:16.912073 | orchestrator | 2026-04-18 02:58:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:16.912156 | orchestrator | 2026-04-18 02:58:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:19.952620 | orchestrator | 2026-04-18 02:58:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:19.954856 | orchestrator | 2026-04-18 02:58:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:19.954914 | orchestrator | 2026-04-18 02:58:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:22.997774 | orchestrator | 2026-04-18 02:58:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:22.998860 | orchestrator | 2026-04-18 02:58:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:22.998979 | orchestrator | 2026-04-18 02:58:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:26.045763 | orchestrator | 2026-04-18 
02:58:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:26.047593 | orchestrator | 2026-04-18 02:58:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:26.047637 | orchestrator | 2026-04-18 02:58:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:29.093252 | orchestrator | 2026-04-18 02:58:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:29.094829 | orchestrator | 2026-04-18 02:58:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:29.094931 | orchestrator | 2026-04-18 02:58:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:32.139252 | orchestrator | 2026-04-18 02:58:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:32.140841 | orchestrator | 2026-04-18 02:58:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:32.140985 | orchestrator | 2026-04-18 02:58:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:35.182103 | orchestrator | 2026-04-18 02:58:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:35.184346 | orchestrator | 2026-04-18 02:58:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:35.184410 | orchestrator | 2026-04-18 02:58:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:38.227228 | orchestrator | 2026-04-18 02:58:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:38.228879 | orchestrator | 2026-04-18 02:58:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:38.229106 | orchestrator | 2026-04-18 02:58:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:41.270166 | orchestrator | 2026-04-18 02:58:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 02:58:41.271814 | orchestrator | 2026-04-18 02:58:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:41.271887 | orchestrator | 2026-04-18 02:58:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:44.312287 | orchestrator | 2026-04-18 02:58:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:44.314122 | orchestrator | 2026-04-18 02:58:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:44.314179 | orchestrator | 2026-04-18 02:58:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:47.356124 | orchestrator | 2026-04-18 02:58:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:47.357990 | orchestrator | 2026-04-18 02:58:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:47.358071 | orchestrator | 2026-04-18 02:58:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:50.403452 | orchestrator | 2026-04-18 02:58:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:50.406394 | orchestrator | 2026-04-18 02:58:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:50.406452 | orchestrator | 2026-04-18 02:58:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:53.453830 | orchestrator | 2026-04-18 02:58:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:53.455709 | orchestrator | 2026-04-18 02:58:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:53.455792 | orchestrator | 2026-04-18 02:58:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:56.496285 | orchestrator | 2026-04-18 02:58:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:56.499226 | orchestrator | 2026-04-18 02:58:56 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:56.499287 | orchestrator | 2026-04-18 02:58:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:58:59.543233 | orchestrator | 2026-04-18 02:58:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:58:59.544706 | orchestrator | 2026-04-18 02:58:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:58:59.544748 | orchestrator | 2026-04-18 02:58:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:02.580741 | orchestrator | 2026-04-18 02:59:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:02.583061 | orchestrator | 2026-04-18 02:59:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:02.583226 | orchestrator | 2026-04-18 02:59:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:05.621711 | orchestrator | 2026-04-18 02:59:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:05.623331 | orchestrator | 2026-04-18 02:59:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:05.623408 | orchestrator | 2026-04-18 02:59:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:08.660765 | orchestrator | 2026-04-18 02:59:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:08.662078 | orchestrator | 2026-04-18 02:59:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:08.662218 | orchestrator | 2026-04-18 02:59:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:11.712457 | orchestrator | 2026-04-18 02:59:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:11.714298 | orchestrator | 2026-04-18 02:59:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
02:59:11.714340 | orchestrator | 2026-04-18 02:59:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:14.756991 | orchestrator | 2026-04-18 02:59:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:14.758462 | orchestrator | 2026-04-18 02:59:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:14.758521 | orchestrator | 2026-04-18 02:59:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:17.798203 | orchestrator | 2026-04-18 02:59:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:17.804733 | orchestrator | 2026-04-18 02:59:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:17.804833 | orchestrator | 2026-04-18 02:59:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:20.853177 | orchestrator | 2026-04-18 02:59:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:20.857295 | orchestrator | 2026-04-18 02:59:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:20.857464 | orchestrator | 2026-04-18 02:59:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:23.902542 | orchestrator | 2026-04-18 02:59:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:23.903508 | orchestrator | 2026-04-18 02:59:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:23.903543 | orchestrator | 2026-04-18 02:59:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:26.952579 | orchestrator | 2026-04-18 02:59:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:26.954203 | orchestrator | 2026-04-18 02:59:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:26.954551 | orchestrator | 2026-04-18 02:59:26 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 02:59:30.004501 | orchestrator | 2026-04-18 02:59:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:30.006535 | orchestrator | 2026-04-18 02:59:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:30.006623 | orchestrator | 2026-04-18 02:59:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:33.046090 | orchestrator | 2026-04-18 02:59:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:33.048686 | orchestrator | 2026-04-18 02:59:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:33.048766 | orchestrator | 2026-04-18 02:59:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:36.099320 | orchestrator | 2026-04-18 02:59:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:36.099656 | orchestrator | 2026-04-18 02:59:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:36.099836 | orchestrator | 2026-04-18 02:59:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:39.150246 | orchestrator | 2026-04-18 02:59:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:39.152595 | orchestrator | 2026-04-18 02:59:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:39.152711 | orchestrator | 2026-04-18 02:59:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:42.197292 | orchestrator | 2026-04-18 02:59:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:42.198364 | orchestrator | 2026-04-18 02:59:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:42.198425 | orchestrator | 2026-04-18 02:59:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:45.242145 | orchestrator | 2026-04-18 
02:59:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:45.245223 | orchestrator | 2026-04-18 02:59:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:45.245278 | orchestrator | 2026-04-18 02:59:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:48.290747 | orchestrator | 2026-04-18 02:59:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:48.292376 | orchestrator | 2026-04-18 02:59:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:48.292413 | orchestrator | 2026-04-18 02:59:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:51.344711 | orchestrator | 2026-04-18 02:59:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:51.347270 | orchestrator | 2026-04-18 02:59:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:51.347316 | orchestrator | 2026-04-18 02:59:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:54.390238 | orchestrator | 2026-04-18 02:59:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:54.392249 | orchestrator | 2026-04-18 02:59:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:54.392528 | orchestrator | 2026-04-18 02:59:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 02:59:57.439356 | orchestrator | 2026-04-18 02:59:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 02:59:57.441023 | orchestrator | 2026-04-18 02:59:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 02:59:57.441478 | orchestrator | 2026-04-18 02:59:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:00:00.483066 | orchestrator | 2026-04-18 03:00:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:00:00.484939 | orchestrator | 2026-04-18 03:00:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:00:00.485070 | orchestrator | 2026-04-18 03:00:00 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated every ~3 seconds from 03:00:03 to 03:05:30: tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 remain in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
2026-04-18 03:05:33.190346 | orchestrator | 2026-04-18 03:05:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:05:33.192004 | orchestrator | 2026-04-18 03:05:33 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:05:33.192076 | orchestrator | 2026-04-18 03:05:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:05:36.244519 | orchestrator | 2026-04-18 03:05:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:05:36.246012 | orchestrator | 2026-04-18 03:05:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:05:36.246175 | orchestrator | 2026-04-18 03:05:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:05:39.302949 | orchestrator | 2026-04-18 03:05:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:05:39.304945 | orchestrator | 2026-04-18 03:05:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:05:39.305002 | orchestrator | 2026-04-18 03:05:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:05:42.352420 | orchestrator | 2026-04-18 03:05:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:05:42.354826 | orchestrator | 2026-04-18 03:05:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:05:42.354875 | orchestrator | 2026-04-18 03:05:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:05:45.410914 | orchestrator | 2026-04-18 03:05:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:05:45.413824 | orchestrator | 2026-04-18 03:05:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:05:45.413901 | orchestrator | 2026-04-18 03:05:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:05:48.471185 | orchestrator | 2026-04-18 03:05:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:05:48.474095 | orchestrator | 2026-04-18 03:05:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:05:48.474272 | orchestrator | 2026-04-18 03:05:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:05:51.530618 | orchestrator | 2026-04-18 03:05:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:05:51.533187 | orchestrator | 2026-04-18 03:05:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:05:51.533248 | orchestrator | 2026-04-18 03:05:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:05:54.584987 | orchestrator | 2026-04-18 03:05:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:05:54.586803 | orchestrator | 2026-04-18 03:05:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:05:54.586948 | orchestrator | 2026-04-18 03:05:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:05:57.634775 | orchestrator | 2026-04-18 03:05:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:05:57.635247 | orchestrator | 2026-04-18 03:05:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:05:57.635612 | orchestrator | 2026-04-18 03:05:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:00.694991 | orchestrator | 2026-04-18 03:06:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:00.697571 | orchestrator | 2026-04-18 03:06:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:00.697656 | orchestrator | 2026-04-18 03:06:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:03.746904 | orchestrator | 2026-04-18 03:06:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:03.747718 | orchestrator | 2026-04-18 03:06:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:03.747789 | orchestrator | 2026-04-18 03:06:03 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:06:06.790434 | orchestrator | 2026-04-18 03:06:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:06.792662 | orchestrator | 2026-04-18 03:06:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:06.792740 | orchestrator | 2026-04-18 03:06:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:09.846654 | orchestrator | 2026-04-18 03:06:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:09.848345 | orchestrator | 2026-04-18 03:06:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:09.848500 | orchestrator | 2026-04-18 03:06:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:12.901228 | orchestrator | 2026-04-18 03:06:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:12.901571 | orchestrator | 2026-04-18 03:06:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:12.901601 | orchestrator | 2026-04-18 03:06:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:15.950640 | orchestrator | 2026-04-18 03:06:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:15.953504 | orchestrator | 2026-04-18 03:06:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:15.953559 | orchestrator | 2026-04-18 03:06:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:19.005620 | orchestrator | 2026-04-18 03:06:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:19.012720 | orchestrator | 2026-04-18 03:06:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:19.012795 | orchestrator | 2026-04-18 03:06:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:22.061049 | orchestrator | 2026-04-18 
03:06:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:22.063069 | orchestrator | 2026-04-18 03:06:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:22.063108 | orchestrator | 2026-04-18 03:06:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:25.115067 | orchestrator | 2026-04-18 03:06:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:25.117090 | orchestrator | 2026-04-18 03:06:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:25.117180 | orchestrator | 2026-04-18 03:06:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:28.155612 | orchestrator | 2026-04-18 03:06:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:28.157331 | orchestrator | 2026-04-18 03:06:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:28.157381 | orchestrator | 2026-04-18 03:06:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:31.209433 | orchestrator | 2026-04-18 03:06:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:31.210961 | orchestrator | 2026-04-18 03:06:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:31.211197 | orchestrator | 2026-04-18 03:06:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:34.257405 | orchestrator | 2026-04-18 03:06:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:34.258796 | orchestrator | 2026-04-18 03:06:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:34.258839 | orchestrator | 2026-04-18 03:06:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:37.310685 | orchestrator | 2026-04-18 03:06:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:06:37.312100 | orchestrator | 2026-04-18 03:06:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:37.312159 | orchestrator | 2026-04-18 03:06:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:40.362581 | orchestrator | 2026-04-18 03:06:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:40.363528 | orchestrator | 2026-04-18 03:06:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:40.363578 | orchestrator | 2026-04-18 03:06:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:43.416534 | orchestrator | 2026-04-18 03:06:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:43.418006 | orchestrator | 2026-04-18 03:06:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:43.418206 | orchestrator | 2026-04-18 03:06:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:46.464731 | orchestrator | 2026-04-18 03:06:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:46.466790 | orchestrator | 2026-04-18 03:06:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:46.466897 | orchestrator | 2026-04-18 03:06:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:49.513266 | orchestrator | 2026-04-18 03:06:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:49.513764 | orchestrator | 2026-04-18 03:06:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:49.513842 | orchestrator | 2026-04-18 03:06:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:52.564014 | orchestrator | 2026-04-18 03:06:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:52.565897 | orchestrator | 2026-04-18 03:06:52 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:52.565952 | orchestrator | 2026-04-18 03:06:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:55.620063 | orchestrator | 2026-04-18 03:06:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:55.620992 | orchestrator | 2026-04-18 03:06:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:55.621078 | orchestrator | 2026-04-18 03:06:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:06:58.674680 | orchestrator | 2026-04-18 03:06:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:06:58.676637 | orchestrator | 2026-04-18 03:06:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:06:58.676724 | orchestrator | 2026-04-18 03:06:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:01.739628 | orchestrator | 2026-04-18 03:07:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:01.742171 | orchestrator | 2026-04-18 03:07:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:01.742232 | orchestrator | 2026-04-18 03:07:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:04.794250 | orchestrator | 2026-04-18 03:07:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:04.794931 | orchestrator | 2026-04-18 03:07:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:04.795078 | orchestrator | 2026-04-18 03:07:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:07.848561 | orchestrator | 2026-04-18 03:07:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:07.850627 | orchestrator | 2026-04-18 03:07:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:07:07.850825 | orchestrator | 2026-04-18 03:07:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:10.902859 | orchestrator | 2026-04-18 03:07:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:10.904745 | orchestrator | 2026-04-18 03:07:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:10.904792 | orchestrator | 2026-04-18 03:07:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:13.960174 | orchestrator | 2026-04-18 03:07:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:13.960945 | orchestrator | 2026-04-18 03:07:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:13.961005 | orchestrator | 2026-04-18 03:07:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:16.998502 | orchestrator | 2026-04-18 03:07:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:16.998583 | orchestrator | 2026-04-18 03:07:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:16.998592 | orchestrator | 2026-04-18 03:07:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:20.044532 | orchestrator | 2026-04-18 03:07:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:20.046475 | orchestrator | 2026-04-18 03:07:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:20.046622 | orchestrator | 2026-04-18 03:07:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:23.096108 | orchestrator | 2026-04-18 03:07:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:23.098554 | orchestrator | 2026-04-18 03:07:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:23.098681 | orchestrator | 2026-04-18 03:07:23 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:07:26.146342 | orchestrator | 2026-04-18 03:07:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:26.148897 | orchestrator | 2026-04-18 03:07:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:26.148960 | orchestrator | 2026-04-18 03:07:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:29.201461 | orchestrator | 2026-04-18 03:07:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:29.203017 | orchestrator | 2026-04-18 03:07:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:29.203075 | orchestrator | 2026-04-18 03:07:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:32.251265 | orchestrator | 2026-04-18 03:07:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:32.253236 | orchestrator | 2026-04-18 03:07:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:32.253288 | orchestrator | 2026-04-18 03:07:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:35.299782 | orchestrator | 2026-04-18 03:07:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:35.300348 | orchestrator | 2026-04-18 03:07:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:35.300398 | orchestrator | 2026-04-18 03:07:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:38.349866 | orchestrator | 2026-04-18 03:07:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:38.350352 | orchestrator | 2026-04-18 03:07:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:38.350378 | orchestrator | 2026-04-18 03:07:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:41.398529 | orchestrator | 2026-04-18 
03:07:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:41.400748 | orchestrator | 2026-04-18 03:07:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:41.400821 | orchestrator | 2026-04-18 03:07:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:44.450591 | orchestrator | 2026-04-18 03:07:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:44.452811 | orchestrator | 2026-04-18 03:07:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:44.453321 | orchestrator | 2026-04-18 03:07:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:47.502688 | orchestrator | 2026-04-18 03:07:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:47.504722 | orchestrator | 2026-04-18 03:07:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:47.504871 | orchestrator | 2026-04-18 03:07:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:50.554913 | orchestrator | 2026-04-18 03:07:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:50.556198 | orchestrator | 2026-04-18 03:07:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:50.556236 | orchestrator | 2026-04-18 03:07:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:53.603684 | orchestrator | 2026-04-18 03:07:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:53.605386 | orchestrator | 2026-04-18 03:07:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:53.605431 | orchestrator | 2026-04-18 03:07:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:56.649582 | orchestrator | 2026-04-18 03:07:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:07:56.652479 | orchestrator | 2026-04-18 03:07:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:56.652535 | orchestrator | 2026-04-18 03:07:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:07:59.699490 | orchestrator | 2026-04-18 03:07:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:07:59.702165 | orchestrator | 2026-04-18 03:07:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:07:59.702267 | orchestrator | 2026-04-18 03:07:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:02.750797 | orchestrator | 2026-04-18 03:08:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:02.751346 | orchestrator | 2026-04-18 03:08:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:02.751382 | orchestrator | 2026-04-18 03:08:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:05.801236 | orchestrator | 2026-04-18 03:08:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:05.802982 | orchestrator | 2026-04-18 03:08:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:05.803087 | orchestrator | 2026-04-18 03:08:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:08.845141 | orchestrator | 2026-04-18 03:08:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:08.846819 | orchestrator | 2026-04-18 03:08:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:08.846865 | orchestrator | 2026-04-18 03:08:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:11.893061 | orchestrator | 2026-04-18 03:08:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:11.894224 | orchestrator | 2026-04-18 03:08:11 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:11.894280 | orchestrator | 2026-04-18 03:08:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:14.939606 | orchestrator | 2026-04-18 03:08:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:14.941583 | orchestrator | 2026-04-18 03:08:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:14.941714 | orchestrator | 2026-04-18 03:08:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:17.987208 | orchestrator | 2026-04-18 03:08:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:17.988166 | orchestrator | 2026-04-18 03:08:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:17.988201 | orchestrator | 2026-04-18 03:08:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:21.039838 | orchestrator | 2026-04-18 03:08:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:21.040556 | orchestrator | 2026-04-18 03:08:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:21.040610 | orchestrator | 2026-04-18 03:08:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:24.081916 | orchestrator | 2026-04-18 03:08:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:24.082775 | orchestrator | 2026-04-18 03:08:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:24.082838 | orchestrator | 2026-04-18 03:08:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:27.121635 | orchestrator | 2026-04-18 03:08:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:27.123223 | orchestrator | 2026-04-18 03:08:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:08:27.123305 | orchestrator | 2026-04-18 03:08:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:30.164384 | orchestrator | 2026-04-18 03:08:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:30.165354 | orchestrator | 2026-04-18 03:08:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:30.165753 | orchestrator | 2026-04-18 03:08:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:33.208230 | orchestrator | 2026-04-18 03:08:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:33.208647 | orchestrator | 2026-04-18 03:08:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:33.208664 | orchestrator | 2026-04-18 03:08:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:36.253540 | orchestrator | 2026-04-18 03:08:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:36.254756 | orchestrator | 2026-04-18 03:08:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:36.254811 | orchestrator | 2026-04-18 03:08:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:39.301162 | orchestrator | 2026-04-18 03:08:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:39.302862 | orchestrator | 2026-04-18 03:08:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:39.303007 | orchestrator | 2026-04-18 03:08:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:42.351071 | orchestrator | 2026-04-18 03:08:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:42.352787 | orchestrator | 2026-04-18 03:08:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:42.352854 | orchestrator | 2026-04-18 03:08:42 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:08:45.399069 | orchestrator | 2026-04-18 03:08:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:45.400663 | orchestrator | 2026-04-18 03:08:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:45.401210 | orchestrator | 2026-04-18 03:08:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:48.449600 | orchestrator | 2026-04-18 03:08:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:48.449995 | orchestrator | 2026-04-18 03:08:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:48.450064 | orchestrator | 2026-04-18 03:08:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:51.492580 | orchestrator | 2026-04-18 03:08:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:51.496235 | orchestrator | 2026-04-18 03:08:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:51.496908 | orchestrator | 2026-04-18 03:08:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:54.545166 | orchestrator | 2026-04-18 03:08:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:54.548028 | orchestrator | 2026-04-18 03:08:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:54.548104 | orchestrator | 2026-04-18 03:08:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:08:57.595165 | orchestrator | 2026-04-18 03:08:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:08:57.599010 | orchestrator | 2026-04-18 03:08:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:08:57.599098 | orchestrator | 2026-04-18 03:08:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:00.643331 | orchestrator | 2026-04-18 
03:09:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:00.644441 | orchestrator | 2026-04-18 03:09:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:00.644480 | orchestrator | 2026-04-18 03:09:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:03.694995 | orchestrator | 2026-04-18 03:09:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:03.696478 | orchestrator | 2026-04-18 03:09:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:03.696529 | orchestrator | 2026-04-18 03:09:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:06.746296 | orchestrator | 2026-04-18 03:09:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:06.747544 | orchestrator | 2026-04-18 03:09:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:06.748068 | orchestrator | 2026-04-18 03:09:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:09.791664 | orchestrator | 2026-04-18 03:09:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:09.793869 | orchestrator | 2026-04-18 03:09:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:09.793931 | orchestrator | 2026-04-18 03:09:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:12.839069 | orchestrator | 2026-04-18 03:09:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:12.840932 | orchestrator | 2026-04-18 03:09:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:12.841013 | orchestrator | 2026-04-18 03:09:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:15.888712 | orchestrator | 2026-04-18 03:09:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:09:15.890147 | orchestrator | 2026-04-18 03:09:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:15.890194 | orchestrator | 2026-04-18 03:09:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:18.942694 | orchestrator | 2026-04-18 03:09:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:18.943223 | orchestrator | 2026-04-18 03:09:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:18.943588 | orchestrator | 2026-04-18 03:09:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:21.989326 | orchestrator | 2026-04-18 03:09:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:21.991084 | orchestrator | 2026-04-18 03:09:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:21.991139 | orchestrator | 2026-04-18 03:09:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:25.038106 | orchestrator | 2026-04-18 03:09:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:25.038326 | orchestrator | 2026-04-18 03:09:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:25.038344 | orchestrator | 2026-04-18 03:09:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:28.084504 | orchestrator | 2026-04-18 03:09:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:28.087186 | orchestrator | 2026-04-18 03:09:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:09:28.087273 | orchestrator | 2026-04-18 03:09:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:09:31.134430 | orchestrator | 2026-04-18 03:09:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:09:31.136130 | orchestrator | 2026-04-18 03:09:31 | INFO  
[... repeated polling entries trimmed: tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 remained in state STARTED, re-checked every ~3 seconds from 03:09:31 through 03:14:14 ...]
2026-04-18 03:14:17.951794 | orchestrator | 2026-04-18 
03:14:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:17.953560 | orchestrator | 2026-04-18 03:14:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:17.953611 | orchestrator | 2026-04-18 03:14:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:20.997345 | orchestrator | 2026-04-18 03:14:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:20.999900 | orchestrator | 2026-04-18 03:14:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:20.999941 | orchestrator | 2026-04-18 03:14:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:24.044756 | orchestrator | 2026-04-18 03:14:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:24.045911 | orchestrator | 2026-04-18 03:14:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:24.046005 | orchestrator | 2026-04-18 03:14:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:27.098262 | orchestrator | 2026-04-18 03:14:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:27.100736 | orchestrator | 2026-04-18 03:14:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:27.100793 | orchestrator | 2026-04-18 03:14:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:30.150318 | orchestrator | 2026-04-18 03:14:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:30.151640 | orchestrator | 2026-04-18 03:14:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:30.151682 | orchestrator | 2026-04-18 03:14:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:33.203524 | orchestrator | 2026-04-18 03:14:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:14:33.204614 | orchestrator | 2026-04-18 03:14:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:33.204664 | orchestrator | 2026-04-18 03:14:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:36.252867 | orchestrator | 2026-04-18 03:14:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:36.253461 | orchestrator | 2026-04-18 03:14:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:36.253825 | orchestrator | 2026-04-18 03:14:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:39.296679 | orchestrator | 2026-04-18 03:14:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:39.297130 | orchestrator | 2026-04-18 03:14:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:39.297242 | orchestrator | 2026-04-18 03:14:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:42.351097 | orchestrator | 2026-04-18 03:14:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:42.352797 | orchestrator | 2026-04-18 03:14:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:42.352860 | orchestrator | 2026-04-18 03:14:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:45.406241 | orchestrator | 2026-04-18 03:14:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:45.407667 | orchestrator | 2026-04-18 03:14:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:45.407717 | orchestrator | 2026-04-18 03:14:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:48.455923 | orchestrator | 2026-04-18 03:14:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:48.458287 | orchestrator | 2026-04-18 03:14:48 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:48.458406 | orchestrator | 2026-04-18 03:14:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:51.504784 | orchestrator | 2026-04-18 03:14:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:51.505768 | orchestrator | 2026-04-18 03:14:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:51.505807 | orchestrator | 2026-04-18 03:14:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:54.553393 | orchestrator | 2026-04-18 03:14:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:54.554809 | orchestrator | 2026-04-18 03:14:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:54.554861 | orchestrator | 2026-04-18 03:14:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:14:57.601600 | orchestrator | 2026-04-18 03:14:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:14:57.604343 | orchestrator | 2026-04-18 03:14:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:14:57.604393 | orchestrator | 2026-04-18 03:14:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:00.653298 | orchestrator | 2026-04-18 03:15:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:00.655007 | orchestrator | 2026-04-18 03:15:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:00.655059 | orchestrator | 2026-04-18 03:15:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:03.705432 | orchestrator | 2026-04-18 03:15:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:03.706718 | orchestrator | 2026-04-18 03:15:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:15:03.706772 | orchestrator | 2026-04-18 03:15:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:06.754902 | orchestrator | 2026-04-18 03:15:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:06.756638 | orchestrator | 2026-04-18 03:15:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:06.756895 | orchestrator | 2026-04-18 03:15:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:09.804937 | orchestrator | 2026-04-18 03:15:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:09.806094 | orchestrator | 2026-04-18 03:15:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:09.806157 | orchestrator | 2026-04-18 03:15:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:12.858588 | orchestrator | 2026-04-18 03:15:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:12.859730 | orchestrator | 2026-04-18 03:15:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:12.859848 | orchestrator | 2026-04-18 03:15:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:15.904611 | orchestrator | 2026-04-18 03:15:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:15.906525 | orchestrator | 2026-04-18 03:15:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:15.906587 | orchestrator | 2026-04-18 03:15:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:18.954866 | orchestrator | 2026-04-18 03:15:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:18.957186 | orchestrator | 2026-04-18 03:15:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:18.957260 | orchestrator | 2026-04-18 03:15:18 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:15:22.005596 | orchestrator | 2026-04-18 03:15:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:22.007003 | orchestrator | 2026-04-18 03:15:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:22.007090 | orchestrator | 2026-04-18 03:15:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:25.057808 | orchestrator | 2026-04-18 03:15:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:25.060357 | orchestrator | 2026-04-18 03:15:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:25.060481 | orchestrator | 2026-04-18 03:15:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:28.120201 | orchestrator | 2026-04-18 03:15:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:28.120400 | orchestrator | 2026-04-18 03:15:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:28.120417 | orchestrator | 2026-04-18 03:15:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:31.163413 | orchestrator | 2026-04-18 03:15:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:31.164570 | orchestrator | 2026-04-18 03:15:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:31.164676 | orchestrator | 2026-04-18 03:15:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:34.207244 | orchestrator | 2026-04-18 03:15:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:34.209062 | orchestrator | 2026-04-18 03:15:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:34.209122 | orchestrator | 2026-04-18 03:15:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:37.251872 | orchestrator | 2026-04-18 
03:15:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:37.252056 | orchestrator | 2026-04-18 03:15:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:37.252073 | orchestrator | 2026-04-18 03:15:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:40.294716 | orchestrator | 2026-04-18 03:15:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:40.295672 | orchestrator | 2026-04-18 03:15:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:40.295711 | orchestrator | 2026-04-18 03:15:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:43.348281 | orchestrator | 2026-04-18 03:15:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:43.349965 | orchestrator | 2026-04-18 03:15:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:43.350011 | orchestrator | 2026-04-18 03:15:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:46.401388 | orchestrator | 2026-04-18 03:15:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:46.403175 | orchestrator | 2026-04-18 03:15:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:46.403269 | orchestrator | 2026-04-18 03:15:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:49.455639 | orchestrator | 2026-04-18 03:15:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:49.457757 | orchestrator | 2026-04-18 03:15:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:49.457821 | orchestrator | 2026-04-18 03:15:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:52.512512 | orchestrator | 2026-04-18 03:15:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:15:52.514748 | orchestrator | 2026-04-18 03:15:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:52.514798 | orchestrator | 2026-04-18 03:15:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:55.569455 | orchestrator | 2026-04-18 03:15:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:55.570818 | orchestrator | 2026-04-18 03:15:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:55.570970 | orchestrator | 2026-04-18 03:15:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:15:58.620656 | orchestrator | 2026-04-18 03:15:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:15:58.623033 | orchestrator | 2026-04-18 03:15:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:15:58.623096 | orchestrator | 2026-04-18 03:15:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:01.680542 | orchestrator | 2026-04-18 03:16:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:01.681994 | orchestrator | 2026-04-18 03:16:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:01.682127 | orchestrator | 2026-04-18 03:16:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:04.727415 | orchestrator | 2026-04-18 03:16:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:04.728365 | orchestrator | 2026-04-18 03:16:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:04.728432 | orchestrator | 2026-04-18 03:16:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:07.781505 | orchestrator | 2026-04-18 03:16:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:07.783869 | orchestrator | 2026-04-18 03:16:07 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:07.784024 | orchestrator | 2026-04-18 03:16:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:10.832822 | orchestrator | 2026-04-18 03:16:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:10.834104 | orchestrator | 2026-04-18 03:16:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:10.834253 | orchestrator | 2026-04-18 03:16:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:13.890574 | orchestrator | 2026-04-18 03:16:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:13.891766 | orchestrator | 2026-04-18 03:16:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:13.891881 | orchestrator | 2026-04-18 03:16:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:16.944568 | orchestrator | 2026-04-18 03:16:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:16.947218 | orchestrator | 2026-04-18 03:16:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:16.947329 | orchestrator | 2026-04-18 03:16:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:19.993815 | orchestrator | 2026-04-18 03:16:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:19.994995 | orchestrator | 2026-04-18 03:16:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:19.995043 | orchestrator | 2026-04-18 03:16:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:23.044402 | orchestrator | 2026-04-18 03:16:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:23.045396 | orchestrator | 2026-04-18 03:16:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:16:23.045445 | orchestrator | 2026-04-18 03:16:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:26.101697 | orchestrator | 2026-04-18 03:16:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:26.103051 | orchestrator | 2026-04-18 03:16:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:26.103099 | orchestrator | 2026-04-18 03:16:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:29.150434 | orchestrator | 2026-04-18 03:16:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:29.151051 | orchestrator | 2026-04-18 03:16:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:29.151081 | orchestrator | 2026-04-18 03:16:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:32.211549 | orchestrator | 2026-04-18 03:16:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:32.212861 | orchestrator | 2026-04-18 03:16:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:32.212914 | orchestrator | 2026-04-18 03:16:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:35.268434 | orchestrator | 2026-04-18 03:16:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:35.268547 | orchestrator | 2026-04-18 03:16:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:35.268600 | orchestrator | 2026-04-18 03:16:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:38.311217 | orchestrator | 2026-04-18 03:16:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:38.312557 | orchestrator | 2026-04-18 03:16:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:38.312625 | orchestrator | 2026-04-18 03:16:38 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:16:41.356583 | orchestrator | 2026-04-18 03:16:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:41.358688 | orchestrator | 2026-04-18 03:16:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:41.358761 | orchestrator | 2026-04-18 03:16:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:44.413468 | orchestrator | 2026-04-18 03:16:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:44.415439 | orchestrator | 2026-04-18 03:16:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:44.415528 | orchestrator | 2026-04-18 03:16:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:47.473582 | orchestrator | 2026-04-18 03:16:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:47.475071 | orchestrator | 2026-04-18 03:16:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:47.475133 | orchestrator | 2026-04-18 03:16:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:50.527090 | orchestrator | 2026-04-18 03:16:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:50.529023 | orchestrator | 2026-04-18 03:16:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:50.529092 | orchestrator | 2026-04-18 03:16:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:53.576698 | orchestrator | 2026-04-18 03:16:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:53.577533 | orchestrator | 2026-04-18 03:16:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:53.577830 | orchestrator | 2026-04-18 03:16:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:56.641485 | orchestrator | 2026-04-18 
03:16:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:56.643055 | orchestrator | 2026-04-18 03:16:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:56.643887 | orchestrator | 2026-04-18 03:16:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:16:59.691048 | orchestrator | 2026-04-18 03:16:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:16:59.692551 | orchestrator | 2026-04-18 03:16:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:16:59.692621 | orchestrator | 2026-04-18 03:16:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:02.752170 | orchestrator | 2026-04-18 03:17:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:02.753533 | orchestrator | 2026-04-18 03:17:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:02.753983 | orchestrator | 2026-04-18 03:17:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:05.804798 | orchestrator | 2026-04-18 03:17:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:05.806258 | orchestrator | 2026-04-18 03:17:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:05.806294 | orchestrator | 2026-04-18 03:17:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:08.856614 | orchestrator | 2026-04-18 03:17:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:08.857264 | orchestrator | 2026-04-18 03:17:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:08.857297 | orchestrator | 2026-04-18 03:17:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:11.905058 | orchestrator | 2026-04-18 03:17:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:17:11.906167 | orchestrator | 2026-04-18 03:17:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:11.906201 | orchestrator | 2026-04-18 03:17:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:14.952166 | orchestrator | 2026-04-18 03:17:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:14.955527 | orchestrator | 2026-04-18 03:17:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:14.955623 | orchestrator | 2026-04-18 03:17:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:18.012123 | orchestrator | 2026-04-18 03:17:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:18.013241 | orchestrator | 2026-04-18 03:17:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:18.013292 | orchestrator | 2026-04-18 03:17:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:21.057456 | orchestrator | 2026-04-18 03:17:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:21.058957 | orchestrator | 2026-04-18 03:17:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:21.059098 | orchestrator | 2026-04-18 03:17:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:24.103044 | orchestrator | 2026-04-18 03:17:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:24.105093 | orchestrator | 2026-04-18 03:17:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:24.105215 | orchestrator | 2026-04-18 03:17:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:27.152239 | orchestrator | 2026-04-18 03:17:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:27.152949 | orchestrator | 2026-04-18 03:17:27 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:27.152979 | orchestrator | 2026-04-18 03:17:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:30.202124 | orchestrator | 2026-04-18 03:17:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:30.203436 | orchestrator | 2026-04-18 03:17:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:30.203633 | orchestrator | 2026-04-18 03:17:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:33.265313 | orchestrator | 2026-04-18 03:17:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:33.267161 | orchestrator | 2026-04-18 03:17:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:33.267212 | orchestrator | 2026-04-18 03:17:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:36.314201 | orchestrator | 2026-04-18 03:17:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:36.316775 | orchestrator | 2026-04-18 03:17:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:36.316836 | orchestrator | 2026-04-18 03:17:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:39.365667 | orchestrator | 2026-04-18 03:17:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:39.366069 | orchestrator | 2026-04-18 03:17:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:39.366265 | orchestrator | 2026-04-18 03:17:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:42.411391 | orchestrator | 2026-04-18 03:17:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:42.414156 | orchestrator | 2026-04-18 03:17:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:17:42.414248 | orchestrator | 2026-04-18 03:17:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:45.470174 | orchestrator | 2026-04-18 03:17:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:45.470911 | orchestrator | 2026-04-18 03:17:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:45.470982 | orchestrator | 2026-04-18 03:17:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:48.520647 | orchestrator | 2026-04-18 03:17:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:48.522224 | orchestrator | 2026-04-18 03:17:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:48.522477 | orchestrator | 2026-04-18 03:17:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:51.569897 | orchestrator | 2026-04-18 03:17:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:51.573312 | orchestrator | 2026-04-18 03:17:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:51.573399 | orchestrator | 2026-04-18 03:17:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:54.631451 | orchestrator | 2026-04-18 03:17:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:54.631527 | orchestrator | 2026-04-18 03:17:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:54.631536 | orchestrator | 2026-04-18 03:17:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:17:57.676145 | orchestrator | 2026-04-18 03:17:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:17:57.676360 | orchestrator | 2026-04-18 03:17:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:17:57.676385 | orchestrator | 2026-04-18 03:17:57 | INFO  | Wait 1 second(s) 
until the next check
[... repetitive polling output condensed: from 2026-04-18 03:18:00 through 03:23:14 the orchestrator logged, every ~3 seconds, that Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and Task 4233de3e-3508-4122-beb3-868538e67502 were in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
2026-04-18 03:23:14.877158 | orchestrator | 2026-04-18 03:23:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 03:23:14.879031 | orchestrator | 2026-04-18 03:23:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 03:23:14.879112 | orchestrator | 2026-04-18 03:23:14 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:23:17.926242 | orchestrator | 2026-04-18 03:23:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:17.928376 | orchestrator | 2026-04-18 03:23:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:17.928450 | orchestrator | 2026-04-18 03:23:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:20.976365 | orchestrator | 2026-04-18 03:23:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:20.977824 | orchestrator | 2026-04-18 03:23:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:20.977940 | orchestrator | 2026-04-18 03:23:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:24.032638 | orchestrator | 2026-04-18 03:23:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:24.033853 | orchestrator | 2026-04-18 03:23:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:24.033915 | orchestrator | 2026-04-18 03:23:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:27.080111 | orchestrator | 2026-04-18 03:23:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:27.081351 | orchestrator | 2026-04-18 03:23:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:27.081472 | orchestrator | 2026-04-18 03:23:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:30.135581 | orchestrator | 2026-04-18 03:23:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:30.137809 | orchestrator | 2026-04-18 03:23:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:30.137870 | orchestrator | 2026-04-18 03:23:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:33.185752 | orchestrator | 2026-04-18 
03:23:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:33.188194 | orchestrator | 2026-04-18 03:23:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:33.188412 | orchestrator | 2026-04-18 03:23:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:36.236107 | orchestrator | 2026-04-18 03:23:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:36.238486 | orchestrator | 2026-04-18 03:23:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:36.238546 | orchestrator | 2026-04-18 03:23:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:39.286926 | orchestrator | 2026-04-18 03:23:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:39.288921 | orchestrator | 2026-04-18 03:23:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:39.288986 | orchestrator | 2026-04-18 03:23:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:42.338605 | orchestrator | 2026-04-18 03:23:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:42.341834 | orchestrator | 2026-04-18 03:23:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:42.341915 | orchestrator | 2026-04-18 03:23:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:45.383870 | orchestrator | 2026-04-18 03:23:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:45.385445 | orchestrator | 2026-04-18 03:23:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:45.385499 | orchestrator | 2026-04-18 03:23:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:48.423339 | orchestrator | 2026-04-18 03:23:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:23:48.590206 | orchestrator | 2026-04-18 03:23:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:48.590302 | orchestrator | 2026-04-18 03:23:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:51.471491 | orchestrator | 2026-04-18 03:23:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:51.472229 | orchestrator | 2026-04-18 03:23:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:51.472303 | orchestrator | 2026-04-18 03:23:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:54.515535 | orchestrator | 2026-04-18 03:23:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:54.516117 | orchestrator | 2026-04-18 03:23:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:54.516149 | orchestrator | 2026-04-18 03:23:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:23:57.563038 | orchestrator | 2026-04-18 03:23:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:23:57.565811 | orchestrator | 2026-04-18 03:23:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:23:57.565891 | orchestrator | 2026-04-18 03:23:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:24:00.611083 | orchestrator | 2026-04-18 03:24:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:24:00.612097 | orchestrator | 2026-04-18 03:24:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:24:00.612116 | orchestrator | 2026-04-18 03:24:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:24:03.659068 | orchestrator | 2026-04-18 03:24:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:24:03.660495 | orchestrator | 2026-04-18 03:24:03 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:24:03.660547 | orchestrator | 2026-04-18 03:24:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:24:06.715009 | orchestrator | 2026-04-18 03:24:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:24:06.717100 | orchestrator | 2026-04-18 03:24:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:24:06.717295 | orchestrator | 2026-04-18 03:24:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:24:09.757553 | orchestrator | 2026-04-18 03:24:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:24:09.759015 | orchestrator | 2026-04-18 03:24:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:24:09.759091 | orchestrator | 2026-04-18 03:24:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:24:12.803066 | orchestrator | 2026-04-18 03:24:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:24:12.805425 | orchestrator | 2026-04-18 03:24:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:24:12.805480 | orchestrator | 2026-04-18 03:24:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:24:15.853366 | orchestrator | 2026-04-18 03:24:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:24:15.854325 | orchestrator | 2026-04-18 03:24:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:24:15.854415 | orchestrator | 2026-04-18 03:24:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:18.990749 | orchestrator | 2026-04-18 03:26:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:18.990838 | orchestrator | 2026-04-18 03:26:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:26:18.990849 | orchestrator | 2026-04-18 03:26:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:22.041073 | orchestrator | 2026-04-18 03:26:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:22.043471 | orchestrator | 2026-04-18 03:26:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:22.043542 | orchestrator | 2026-04-18 03:26:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:25.091444 | orchestrator | 2026-04-18 03:26:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:25.094099 | orchestrator | 2026-04-18 03:26:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:25.152065 | orchestrator | 2026-04-18 03:26:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:28.141742 | orchestrator | 2026-04-18 03:26:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:28.144318 | orchestrator | 2026-04-18 03:26:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:28.144515 | orchestrator | 2026-04-18 03:26:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:31.187760 | orchestrator | 2026-04-18 03:26:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:31.188937 | orchestrator | 2026-04-18 03:26:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:31.188990 | orchestrator | 2026-04-18 03:26:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:34.230671 | orchestrator | 2026-04-18 03:26:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:34.232498 | orchestrator | 2026-04-18 03:26:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:34.232574 | orchestrator | 2026-04-18 03:26:34 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:26:37.276913 | orchestrator | 2026-04-18 03:26:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:37.278801 | orchestrator | 2026-04-18 03:26:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:37.278875 | orchestrator | 2026-04-18 03:26:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:40.315168 | orchestrator | 2026-04-18 03:26:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:40.318369 | orchestrator | 2026-04-18 03:26:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:40.319026 | orchestrator | 2026-04-18 03:26:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:43.363912 | orchestrator | 2026-04-18 03:26:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:43.365953 | orchestrator | 2026-04-18 03:26:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:43.366003 | orchestrator | 2026-04-18 03:26:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:46.412760 | orchestrator | 2026-04-18 03:26:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:46.414487 | orchestrator | 2026-04-18 03:26:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:46.414582 | orchestrator | 2026-04-18 03:26:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:49.457563 | orchestrator | 2026-04-18 03:26:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:49.458356 | orchestrator | 2026-04-18 03:26:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:49.458474 | orchestrator | 2026-04-18 03:26:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:52.504676 | orchestrator | 2026-04-18 
03:26:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:52.506655 | orchestrator | 2026-04-18 03:26:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:52.506823 | orchestrator | 2026-04-18 03:26:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:55.546797 | orchestrator | 2026-04-18 03:26:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:55.548900 | orchestrator | 2026-04-18 03:26:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:55.548980 | orchestrator | 2026-04-18 03:26:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:26:58.594561 | orchestrator | 2026-04-18 03:26:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:26:58.596935 | orchestrator | 2026-04-18 03:26:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:26:58.596989 | orchestrator | 2026-04-18 03:26:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:01.647252 | orchestrator | 2026-04-18 03:27:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:01.649008 | orchestrator | 2026-04-18 03:27:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:01.649059 | orchestrator | 2026-04-18 03:27:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:04.693780 | orchestrator | 2026-04-18 03:27:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:04.694987 | orchestrator | 2026-04-18 03:27:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:04.695069 | orchestrator | 2026-04-18 03:27:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:07.738999 | orchestrator | 2026-04-18 03:27:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:27:07.740743 | orchestrator | 2026-04-18 03:27:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:07.740817 | orchestrator | 2026-04-18 03:27:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:10.783736 | orchestrator | 2026-04-18 03:27:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:10.785339 | orchestrator | 2026-04-18 03:27:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:10.785393 | orchestrator | 2026-04-18 03:27:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:13.829638 | orchestrator | 2026-04-18 03:27:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:13.831364 | orchestrator | 2026-04-18 03:27:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:13.831464 | orchestrator | 2026-04-18 03:27:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:16.875011 | orchestrator | 2026-04-18 03:27:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:16.876372 | orchestrator | 2026-04-18 03:27:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:16.876457 | orchestrator | 2026-04-18 03:27:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:19.918681 | orchestrator | 2026-04-18 03:27:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:19.920894 | orchestrator | 2026-04-18 03:27:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:19.920948 | orchestrator | 2026-04-18 03:27:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:22.965884 | orchestrator | 2026-04-18 03:27:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:22.967719 | orchestrator | 2026-04-18 03:27:22 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:22.967767 | orchestrator | 2026-04-18 03:27:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:26.013311 | orchestrator | 2026-04-18 03:27:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:26.015700 | orchestrator | 2026-04-18 03:27:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:26.015778 | orchestrator | 2026-04-18 03:27:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:29.058342 | orchestrator | 2026-04-18 03:27:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:29.059595 | orchestrator | 2026-04-18 03:27:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:29.059671 | orchestrator | 2026-04-18 03:27:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:32.106610 | orchestrator | 2026-04-18 03:27:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:32.108524 | orchestrator | 2026-04-18 03:27:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:32.108576 | orchestrator | 2026-04-18 03:27:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:35.153236 | orchestrator | 2026-04-18 03:27:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:35.154737 | orchestrator | 2026-04-18 03:27:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:35.154830 | orchestrator | 2026-04-18 03:27:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:38.199528 | orchestrator | 2026-04-18 03:27:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:38.202280 | orchestrator | 2026-04-18 03:27:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:27:38.202339 | orchestrator | 2026-04-18 03:27:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:41.252828 | orchestrator | 2026-04-18 03:27:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:41.256380 | orchestrator | 2026-04-18 03:27:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:41.256481 | orchestrator | 2026-04-18 03:27:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:44.305146 | orchestrator | 2026-04-18 03:27:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:44.307658 | orchestrator | 2026-04-18 03:27:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:44.307737 | orchestrator | 2026-04-18 03:27:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:47.351254 | orchestrator | 2026-04-18 03:27:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:47.354262 | orchestrator | 2026-04-18 03:27:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:47.354347 | orchestrator | 2026-04-18 03:27:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:50.398485 | orchestrator | 2026-04-18 03:27:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:50.402494 | orchestrator | 2026-04-18 03:27:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:50.402561 | orchestrator | 2026-04-18 03:27:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:53.441347 | orchestrator | 2026-04-18 03:27:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:53.443263 | orchestrator | 2026-04-18 03:27:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:53.443373 | orchestrator | 2026-04-18 03:27:53 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:27:56.483917 | orchestrator | 2026-04-18 03:27:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:56.485572 | orchestrator | 2026-04-18 03:27:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:56.485646 | orchestrator | 2026-04-18 03:27:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:27:59.530413 | orchestrator | 2026-04-18 03:27:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:27:59.533585 | orchestrator | 2026-04-18 03:27:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:27:59.533678 | orchestrator | 2026-04-18 03:27:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:02.577433 | orchestrator | 2026-04-18 03:28:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:02.578471 | orchestrator | 2026-04-18 03:28:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:02.578532 | orchestrator | 2026-04-18 03:28:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:05.619184 | orchestrator | 2026-04-18 03:28:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:05.621852 | orchestrator | 2026-04-18 03:28:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:05.621905 | orchestrator | 2026-04-18 03:28:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:08.668554 | orchestrator | 2026-04-18 03:28:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:08.670602 | orchestrator | 2026-04-18 03:28:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:08.670707 | orchestrator | 2026-04-18 03:28:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:11.712977 | orchestrator | 2026-04-18 
03:28:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:11.714567 | orchestrator | 2026-04-18 03:28:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:11.714625 | orchestrator | 2026-04-18 03:28:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:14.760244 | orchestrator | 2026-04-18 03:28:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:14.762500 | orchestrator | 2026-04-18 03:28:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:14.762573 | orchestrator | 2026-04-18 03:28:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:17.815721 | orchestrator | 2026-04-18 03:28:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:17.820160 | orchestrator | 2026-04-18 03:28:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:17.820745 | orchestrator | 2026-04-18 03:28:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:20.866183 | orchestrator | 2026-04-18 03:28:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:20.869573 | orchestrator | 2026-04-18 03:28:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:20.869635 | orchestrator | 2026-04-18 03:28:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:23.916949 | orchestrator | 2026-04-18 03:28:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:23.919919 | orchestrator | 2026-04-18 03:28:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:23.919979 | orchestrator | 2026-04-18 03:28:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:26.963852 | orchestrator | 2026-04-18 03:28:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:28:26.964732 | orchestrator | 2026-04-18 03:28:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:26.964826 | orchestrator | 2026-04-18 03:28:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:30.007792 | orchestrator | 2026-04-18 03:28:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:30.009082 | orchestrator | 2026-04-18 03:28:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:30.009139 | orchestrator | 2026-04-18 03:28:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:33.057521 | orchestrator | 2026-04-18 03:28:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:33.058418 | orchestrator | 2026-04-18 03:28:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:33.058472 | orchestrator | 2026-04-18 03:28:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:36.099926 | orchestrator | 2026-04-18 03:28:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:36.101465 | orchestrator | 2026-04-18 03:28:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:36.101538 | orchestrator | 2026-04-18 03:28:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:39.138987 | orchestrator | 2026-04-18 03:28:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:39.140239 | orchestrator | 2026-04-18 03:28:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:39.140309 | orchestrator | 2026-04-18 03:28:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:42.178986 | orchestrator | 2026-04-18 03:28:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:42.180991 | orchestrator | 2026-04-18 03:28:42 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:42.181269 | orchestrator | 2026-04-18 03:28:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:45.221704 | orchestrator | 2026-04-18 03:28:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:45.222140 | orchestrator | 2026-04-18 03:28:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:45.222211 | orchestrator | 2026-04-18 03:28:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:48.265632 | orchestrator | 2026-04-18 03:28:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:48.267304 | orchestrator | 2026-04-18 03:28:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:48.267366 | orchestrator | 2026-04-18 03:28:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:51.321305 | orchestrator | 2026-04-18 03:28:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:51.324846 | orchestrator | 2026-04-18 03:28:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:51.324887 | orchestrator | 2026-04-18 03:28:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:54.364315 | orchestrator | 2026-04-18 03:28:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:54.365532 | orchestrator | 2026-04-18 03:28:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:28:54.365561 | orchestrator | 2026-04-18 03:28:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:28:57.405837 | orchestrator | 2026-04-18 03:28:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:28:57.407316 | orchestrator | 2026-04-18 03:28:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:28:57.407357 | orchestrator | 2026-04-18 03:28:57 | INFO  | Wait 1 second(s) until the next check
2026-04-18 03:29:00.448844 | orchestrator | 2026-04-18 03:29:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 03:29:00.450980 | orchestrator | 2026-04-18 03:29:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 03:29:00.451096 | orchestrator | 2026-04-18 03:29:00 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: both tasks remained in state STARTED, re-checked every ~3 seconds from 03:29:03 through 03:34:26 ...]
2026-04-18 03:34:29.795464 | orchestrator | 2026-04-18 03:34:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 03:34:29.797404 | orchestrator | 2026-04-18 03:34:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 03:34:29.797448 | orchestrator | 2026-04-18 03:34:29 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:34:32.847020 | orchestrator | 2026-04-18 03:34:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:34:32.848102 | orchestrator | 2026-04-18 03:34:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:34:32.848278 | orchestrator | 2026-04-18 03:34:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:34:35.887683 | orchestrator | 2026-04-18 03:34:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:34:35.889497 | orchestrator | 2026-04-18 03:34:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:34:35.889619 | orchestrator | 2026-04-18 03:34:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:34:38.937843 | orchestrator | 2026-04-18 03:34:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:34:38.940297 | orchestrator | 2026-04-18 03:34:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:34:38.940347 | orchestrator | 2026-04-18 03:34:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:34:41.982127 | orchestrator | 2026-04-18 03:34:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:34:41.983534 | orchestrator | 2026-04-18 03:34:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:34:41.983579 | orchestrator | 2026-04-18 03:34:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:34:45.036461 | orchestrator | 2026-04-18 03:34:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:34:45.038534 | orchestrator | 2026-04-18 03:34:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:34:45.038637 | orchestrator | 2026-04-18 03:34:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:34:48.087630 | orchestrator | 2026-04-18 
03:34:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:34:48.088696 | orchestrator | 2026-04-18 03:34:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:34:48.088745 | orchestrator | 2026-04-18 03:34:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:34:51.136421 | orchestrator | 2026-04-18 03:34:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:34:51.140458 | orchestrator | 2026-04-18 03:34:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:34:51.140617 | orchestrator | 2026-04-18 03:34:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:34:54.190710 | orchestrator | 2026-04-18 03:34:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:34:54.192924 | orchestrator | 2026-04-18 03:34:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:34:54.192974 | orchestrator | 2026-04-18 03:34:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:34:57.238692 | orchestrator | 2026-04-18 03:34:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:34:57.240161 | orchestrator | 2026-04-18 03:34:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:34:57.240206 | orchestrator | 2026-04-18 03:34:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:00.281008 | orchestrator | 2026-04-18 03:35:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:00.283240 | orchestrator | 2026-04-18 03:35:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:00.283344 | orchestrator | 2026-04-18 03:35:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:03.328328 | orchestrator | 2026-04-18 03:35:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:35:03.329663 | orchestrator | 2026-04-18 03:35:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:03.329722 | orchestrator | 2026-04-18 03:35:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:06.376730 | orchestrator | 2026-04-18 03:35:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:06.378425 | orchestrator | 2026-04-18 03:35:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:06.378470 | orchestrator | 2026-04-18 03:35:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:09.430659 | orchestrator | 2026-04-18 03:35:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:09.433691 | orchestrator | 2026-04-18 03:35:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:09.433873 | orchestrator | 2026-04-18 03:35:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:12.479732 | orchestrator | 2026-04-18 03:35:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:12.481542 | orchestrator | 2026-04-18 03:35:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:12.481586 | orchestrator | 2026-04-18 03:35:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:15.527381 | orchestrator | 2026-04-18 03:35:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:15.528625 | orchestrator | 2026-04-18 03:35:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:15.528642 | orchestrator | 2026-04-18 03:35:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:18.577450 | orchestrator | 2026-04-18 03:35:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:18.579279 | orchestrator | 2026-04-18 03:35:18 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:18.579345 | orchestrator | 2026-04-18 03:35:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:21.631176 | orchestrator | 2026-04-18 03:35:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:21.634861 | orchestrator | 2026-04-18 03:35:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:21.634963 | orchestrator | 2026-04-18 03:35:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:24.678501 | orchestrator | 2026-04-18 03:35:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:24.680648 | orchestrator | 2026-04-18 03:35:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:24.680721 | orchestrator | 2026-04-18 03:35:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:27.730077 | orchestrator | 2026-04-18 03:35:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:27.731087 | orchestrator | 2026-04-18 03:35:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:27.731131 | orchestrator | 2026-04-18 03:35:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:30.777112 | orchestrator | 2026-04-18 03:35:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:30.779246 | orchestrator | 2026-04-18 03:35:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:30.779377 | orchestrator | 2026-04-18 03:35:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:33.820563 | orchestrator | 2026-04-18 03:35:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:33.823066 | orchestrator | 2026-04-18 03:35:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:35:33.823134 | orchestrator | 2026-04-18 03:35:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:36.866914 | orchestrator | 2026-04-18 03:35:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:36.868557 | orchestrator | 2026-04-18 03:35:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:36.868640 | orchestrator | 2026-04-18 03:35:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:39.910339 | orchestrator | 2026-04-18 03:35:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:39.911412 | orchestrator | 2026-04-18 03:35:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:39.911618 | orchestrator | 2026-04-18 03:35:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:42.949486 | orchestrator | 2026-04-18 03:35:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:42.951469 | orchestrator | 2026-04-18 03:35:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:42.951582 | orchestrator | 2026-04-18 03:35:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:45.985286 | orchestrator | 2026-04-18 03:35:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:45.987406 | orchestrator | 2026-04-18 03:35:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:45.987472 | orchestrator | 2026-04-18 03:35:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:49.040127 | orchestrator | 2026-04-18 03:35:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:49.043552 | orchestrator | 2026-04-18 03:35:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:49.043702 | orchestrator | 2026-04-18 03:35:49 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:35:52.087325 | orchestrator | 2026-04-18 03:35:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:52.088335 | orchestrator | 2026-04-18 03:35:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:52.089552 | orchestrator | 2026-04-18 03:35:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:55.136616 | orchestrator | 2026-04-18 03:35:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:55.137927 | orchestrator | 2026-04-18 03:35:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:55.137977 | orchestrator | 2026-04-18 03:35:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:35:58.186053 | orchestrator | 2026-04-18 03:35:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:35:58.187159 | orchestrator | 2026-04-18 03:35:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:35:58.187212 | orchestrator | 2026-04-18 03:35:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:01.231931 | orchestrator | 2026-04-18 03:36:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:01.234319 | orchestrator | 2026-04-18 03:36:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:01.234375 | orchestrator | 2026-04-18 03:36:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:04.282930 | orchestrator | 2026-04-18 03:36:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:04.284435 | orchestrator | 2026-04-18 03:36:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:04.284477 | orchestrator | 2026-04-18 03:36:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:07.332684 | orchestrator | 2026-04-18 
03:36:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:07.334336 | orchestrator | 2026-04-18 03:36:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:07.334368 | orchestrator | 2026-04-18 03:36:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:10.383293 | orchestrator | 2026-04-18 03:36:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:10.385001 | orchestrator | 2026-04-18 03:36:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:10.385058 | orchestrator | 2026-04-18 03:36:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:13.442864 | orchestrator | 2026-04-18 03:36:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:13.446306 | orchestrator | 2026-04-18 03:36:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:13.446365 | orchestrator | 2026-04-18 03:36:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:16.497605 | orchestrator | 2026-04-18 03:36:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:16.498975 | orchestrator | 2026-04-18 03:36:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:16.499030 | orchestrator | 2026-04-18 03:36:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:19.543213 | orchestrator | 2026-04-18 03:36:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:19.545478 | orchestrator | 2026-04-18 03:36:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:19.545574 | orchestrator | 2026-04-18 03:36:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:22.597156 | orchestrator | 2026-04-18 03:36:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:36:22.599054 | orchestrator | 2026-04-18 03:36:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:22.599122 | orchestrator | 2026-04-18 03:36:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:25.642264 | orchestrator | 2026-04-18 03:36:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:25.643071 | orchestrator | 2026-04-18 03:36:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:25.643123 | orchestrator | 2026-04-18 03:36:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:28.694293 | orchestrator | 2026-04-18 03:36:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:28.695698 | orchestrator | 2026-04-18 03:36:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:28.695902 | orchestrator | 2026-04-18 03:36:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:31.740917 | orchestrator | 2026-04-18 03:36:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:31.744733 | orchestrator | 2026-04-18 03:36:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:31.744811 | orchestrator | 2026-04-18 03:36:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:34.790293 | orchestrator | 2026-04-18 03:36:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:34.791477 | orchestrator | 2026-04-18 03:36:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:34.791521 | orchestrator | 2026-04-18 03:36:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:37.841591 | orchestrator | 2026-04-18 03:36:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:37.842432 | orchestrator | 2026-04-18 03:36:37 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:37.842466 | orchestrator | 2026-04-18 03:36:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:40.890267 | orchestrator | 2026-04-18 03:36:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:40.893032 | orchestrator | 2026-04-18 03:36:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:40.893179 | orchestrator | 2026-04-18 03:36:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:43.943198 | orchestrator | 2026-04-18 03:36:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:43.944851 | orchestrator | 2026-04-18 03:36:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:43.944902 | orchestrator | 2026-04-18 03:36:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:46.993814 | orchestrator | 2026-04-18 03:36:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:46.994885 | orchestrator | 2026-04-18 03:36:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:46.995118 | orchestrator | 2026-04-18 03:36:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:50.039085 | orchestrator | 2026-04-18 03:36:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:50.040610 | orchestrator | 2026-04-18 03:36:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:50.040676 | orchestrator | 2026-04-18 03:36:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:53.085593 | orchestrator | 2026-04-18 03:36:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:53.086542 | orchestrator | 2026-04-18 03:36:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:36:53.086611 | orchestrator | 2026-04-18 03:36:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:56.129141 | orchestrator | 2026-04-18 03:36:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:56.131321 | orchestrator | 2026-04-18 03:36:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:56.131388 | orchestrator | 2026-04-18 03:36:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:36:59.178240 | orchestrator | 2026-04-18 03:36:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:36:59.178462 | orchestrator | 2026-04-18 03:36:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:36:59.178483 | orchestrator | 2026-04-18 03:36:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:02.234254 | orchestrator | 2026-04-18 03:37:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:02.235762 | orchestrator | 2026-04-18 03:37:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:02.235804 | orchestrator | 2026-04-18 03:37:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:05.282797 | orchestrator | 2026-04-18 03:37:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:05.285692 | orchestrator | 2026-04-18 03:37:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:05.285813 | orchestrator | 2026-04-18 03:37:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:08.334097 | orchestrator | 2026-04-18 03:37:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:08.336121 | orchestrator | 2026-04-18 03:37:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:08.336159 | orchestrator | 2026-04-18 03:37:08 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:37:11.382276 | orchestrator | 2026-04-18 03:37:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:11.385137 | orchestrator | 2026-04-18 03:37:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:11.385233 | orchestrator | 2026-04-18 03:37:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:14.434581 | orchestrator | 2026-04-18 03:37:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:14.435908 | orchestrator | 2026-04-18 03:37:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:14.436116 | orchestrator | 2026-04-18 03:37:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:17.485972 | orchestrator | 2026-04-18 03:37:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:17.486222 | orchestrator | 2026-04-18 03:37:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:17.486506 | orchestrator | 2026-04-18 03:37:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:20.530228 | orchestrator | 2026-04-18 03:37:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:20.530995 | orchestrator | 2026-04-18 03:37:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:20.531107 | orchestrator | 2026-04-18 03:37:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:23.575189 | orchestrator | 2026-04-18 03:37:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:23.577048 | orchestrator | 2026-04-18 03:37:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:23.577102 | orchestrator | 2026-04-18 03:37:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:26.622862 | orchestrator | 2026-04-18 
03:37:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:26.625396 | orchestrator | 2026-04-18 03:37:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:26.625452 | orchestrator | 2026-04-18 03:37:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:29.676705 | orchestrator | 2026-04-18 03:37:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:29.677137 | orchestrator | 2026-04-18 03:37:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:29.677328 | orchestrator | 2026-04-18 03:37:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:32.725527 | orchestrator | 2026-04-18 03:37:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:32.727581 | orchestrator | 2026-04-18 03:37:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:32.727631 | orchestrator | 2026-04-18 03:37:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:35.777326 | orchestrator | 2026-04-18 03:37:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:35.778081 | orchestrator | 2026-04-18 03:37:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:35.778137 | orchestrator | 2026-04-18 03:37:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:38.829294 | orchestrator | 2026-04-18 03:37:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:38.830405 | orchestrator | 2026-04-18 03:37:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:38.830437 | orchestrator | 2026-04-18 03:37:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:41.880977 | orchestrator | 2026-04-18 03:37:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:37:41.882587 | orchestrator | 2026-04-18 03:37:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:41.882612 | orchestrator | 2026-04-18 03:37:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:44.929405 | orchestrator | 2026-04-18 03:37:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:44.931577 | orchestrator | 2026-04-18 03:37:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:44.931650 | orchestrator | 2026-04-18 03:37:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:47.982108 | orchestrator | 2026-04-18 03:37:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:47.983921 | orchestrator | 2026-04-18 03:37:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:47.984004 | orchestrator | 2026-04-18 03:37:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:51.031473 | orchestrator | 2026-04-18 03:37:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:51.034599 | orchestrator | 2026-04-18 03:37:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:51.034677 | orchestrator | 2026-04-18 03:37:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:54.081289 | orchestrator | 2026-04-18 03:37:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:54.083167 | orchestrator | 2026-04-18 03:37:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:54.083237 | orchestrator | 2026-04-18 03:37:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:37:57.124201 | orchestrator | 2026-04-18 03:37:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:37:57.125787 | orchestrator | 2026-04-18 03:37:57 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:37:57.126107 | orchestrator | 2026-04-18 03:37:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:38:00.160027 | orchestrator | 2026-04-18 03:38:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:38:00.162969 | orchestrator | 2026-04-18 03:38:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:38:00.163048 | orchestrator | 2026-04-18 03:38:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:38:03.212648 | orchestrator | 2026-04-18 03:38:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:38:03.214770 | orchestrator | 2026-04-18 03:38:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:38:03.214920 | orchestrator | 2026-04-18 03:38:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:38:06.262298 | orchestrator | 2026-04-18 03:38:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:38:06.265886 | orchestrator | 2026-04-18 03:38:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:38:06.265946 | orchestrator | 2026-04-18 03:38:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:38:09.311164 | orchestrator | 2026-04-18 03:38:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:38:09.314196 | orchestrator | 2026-04-18 03:38:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:38:09.314386 | orchestrator | 2026-04-18 03:38:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:38:12.370685 | orchestrator | 2026-04-18 03:38:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:38:12.373485 | orchestrator | 2026-04-18 03:38:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:38:12.373580 | orchestrator | 2026-04-18 03:38:12 | INFO  | Wait 1 second(s) until the next check
2026-04-18 03:38:15.422334 | orchestrator | 2026-04-18 03:38:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 03:38:15.424116 | orchestrator | 2026-04-18 03:38:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 03:38:15.424241 | orchestrator | 2026-04-18 03:38:15 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycle (Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e in state STARTED, Task 4233de3e-3508-4122-beb3-868538e67502 in state STARTED, "Wait 1 second(s) until the next check") repeated roughly every 3 seconds from 03:38:18 through 03:43:11 ...]
2026-04-18 03:43:14.313002 | orchestrator | 2026-04-18 03:43:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 03:43:14.314469 | orchestrator | 2026-04-18 03:43:14 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:14.314496 | orchestrator | 2026-04-18 03:43:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:17.360593 | orchestrator | 2026-04-18 03:43:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:17.361993 | orchestrator | 2026-04-18 03:43:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:17.362131 | orchestrator | 2026-04-18 03:43:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:20.412196 | orchestrator | 2026-04-18 03:43:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:20.414492 | orchestrator | 2026-04-18 03:43:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:20.414632 | orchestrator | 2026-04-18 03:43:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:23.461862 | orchestrator | 2026-04-18 03:43:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:23.463906 | orchestrator | 2026-04-18 03:43:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:23.463978 | orchestrator | 2026-04-18 03:43:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:26.511558 | orchestrator | 2026-04-18 03:43:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:26.513203 | orchestrator | 2026-04-18 03:43:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:26.513278 | orchestrator | 2026-04-18 03:43:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:29.564850 | orchestrator | 2026-04-18 03:43:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:29.567061 | orchestrator | 2026-04-18 03:43:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:43:29.567144 | orchestrator | 2026-04-18 03:43:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:32.616119 | orchestrator | 2026-04-18 03:43:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:32.618147 | orchestrator | 2026-04-18 03:43:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:32.618187 | orchestrator | 2026-04-18 03:43:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:35.670960 | orchestrator | 2026-04-18 03:43:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:35.672744 | orchestrator | 2026-04-18 03:43:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:35.672989 | orchestrator | 2026-04-18 03:43:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:38.722376 | orchestrator | 2026-04-18 03:43:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:38.724092 | orchestrator | 2026-04-18 03:43:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:38.724148 | orchestrator | 2026-04-18 03:43:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:41.768899 | orchestrator | 2026-04-18 03:43:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:41.770296 | orchestrator | 2026-04-18 03:43:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:41.770329 | orchestrator | 2026-04-18 03:43:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:44.821787 | orchestrator | 2026-04-18 03:43:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:44.823734 | orchestrator | 2026-04-18 03:43:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:44.823824 | orchestrator | 2026-04-18 03:43:44 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:43:47.871456 | orchestrator | 2026-04-18 03:43:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:47.873190 | orchestrator | 2026-04-18 03:43:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:47.873300 | orchestrator | 2026-04-18 03:43:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:50.922917 | orchestrator | 2026-04-18 03:43:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:50.924957 | orchestrator | 2026-04-18 03:43:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:50.925019 | orchestrator | 2026-04-18 03:43:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:53.963931 | orchestrator | 2026-04-18 03:43:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:53.966789 | orchestrator | 2026-04-18 03:43:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:53.966851 | orchestrator | 2026-04-18 03:43:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:43:57.011624 | orchestrator | 2026-04-18 03:43:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:43:57.013610 | orchestrator | 2026-04-18 03:43:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:43:57.013670 | orchestrator | 2026-04-18 03:43:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:00.053261 | orchestrator | 2026-04-18 03:44:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:00.054230 | orchestrator | 2026-04-18 03:44:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:00.054278 | orchestrator | 2026-04-18 03:44:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:03.098475 | orchestrator | 2026-04-18 
03:44:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:03.098811 | orchestrator | 2026-04-18 03:44:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:03.099298 | orchestrator | 2026-04-18 03:44:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:06.141406 | orchestrator | 2026-04-18 03:44:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:06.142275 | orchestrator | 2026-04-18 03:44:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:06.142331 | orchestrator | 2026-04-18 03:44:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:09.185117 | orchestrator | 2026-04-18 03:44:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:09.188865 | orchestrator | 2026-04-18 03:44:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:09.188961 | orchestrator | 2026-04-18 03:44:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:12.236402 | orchestrator | 2026-04-18 03:44:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:12.237001 | orchestrator | 2026-04-18 03:44:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:12.237042 | orchestrator | 2026-04-18 03:44:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:15.289285 | orchestrator | 2026-04-18 03:44:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:15.291038 | orchestrator | 2026-04-18 03:44:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:15.291165 | orchestrator | 2026-04-18 03:44:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:18.340631 | orchestrator | 2026-04-18 03:44:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:44:18.342582 | orchestrator | 2026-04-18 03:44:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:18.342665 | orchestrator | 2026-04-18 03:44:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:21.389847 | orchestrator | 2026-04-18 03:44:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:21.391251 | orchestrator | 2026-04-18 03:44:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:21.391364 | orchestrator | 2026-04-18 03:44:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:24.440222 | orchestrator | 2026-04-18 03:44:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:24.441803 | orchestrator | 2026-04-18 03:44:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:24.441891 | orchestrator | 2026-04-18 03:44:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:27.492885 | orchestrator | 2026-04-18 03:44:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:27.494311 | orchestrator | 2026-04-18 03:44:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:27.494393 | orchestrator | 2026-04-18 03:44:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:30.540855 | orchestrator | 2026-04-18 03:44:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:30.544167 | orchestrator | 2026-04-18 03:44:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:30.544226 | orchestrator | 2026-04-18 03:44:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:33.596659 | orchestrator | 2026-04-18 03:44:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:33.602602 | orchestrator | 2026-04-18 03:44:33 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:33.602834 | orchestrator | 2026-04-18 03:44:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:36.655114 | orchestrator | 2026-04-18 03:44:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:36.657221 | orchestrator | 2026-04-18 03:44:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:36.657288 | orchestrator | 2026-04-18 03:44:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:39.706311 | orchestrator | 2026-04-18 03:44:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:39.707985 | orchestrator | 2026-04-18 03:44:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:39.708116 | orchestrator | 2026-04-18 03:44:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:42.756015 | orchestrator | 2026-04-18 03:44:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:42.757512 | orchestrator | 2026-04-18 03:44:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:42.757593 | orchestrator | 2026-04-18 03:44:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:45.802611 | orchestrator | 2026-04-18 03:44:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:45.804742 | orchestrator | 2026-04-18 03:44:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:45.805322 | orchestrator | 2026-04-18 03:44:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:48.851871 | orchestrator | 2026-04-18 03:44:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:48.853832 | orchestrator | 2026-04-18 03:44:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:44:48.853884 | orchestrator | 2026-04-18 03:44:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:51.902322 | orchestrator | 2026-04-18 03:44:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:51.904083 | orchestrator | 2026-04-18 03:44:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:51.904137 | orchestrator | 2026-04-18 03:44:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:54.952183 | orchestrator | 2026-04-18 03:44:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:54.953255 | orchestrator | 2026-04-18 03:44:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:54.953312 | orchestrator | 2026-04-18 03:44:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:44:57.997427 | orchestrator | 2026-04-18 03:44:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:44:57.998214 | orchestrator | 2026-04-18 03:44:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:44:57.998341 | orchestrator | 2026-04-18 03:44:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:01.047977 | orchestrator | 2026-04-18 03:45:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:01.049765 | orchestrator | 2026-04-18 03:45:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:01.049807 | orchestrator | 2026-04-18 03:45:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:04.089667 | orchestrator | 2026-04-18 03:45:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:04.090816 | orchestrator | 2026-04-18 03:45:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:04.090903 | orchestrator | 2026-04-18 03:45:04 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:45:07.133567 | orchestrator | 2026-04-18 03:45:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:07.136455 | orchestrator | 2026-04-18 03:45:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:07.136567 | orchestrator | 2026-04-18 03:45:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:10.183898 | orchestrator | 2026-04-18 03:45:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:10.185915 | orchestrator | 2026-04-18 03:45:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:10.186101 | orchestrator | 2026-04-18 03:45:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:13.233127 | orchestrator | 2026-04-18 03:45:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:13.235434 | orchestrator | 2026-04-18 03:45:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:13.235530 | orchestrator | 2026-04-18 03:45:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:16.285550 | orchestrator | 2026-04-18 03:45:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:16.287252 | orchestrator | 2026-04-18 03:45:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:16.287293 | orchestrator | 2026-04-18 03:45:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:19.330870 | orchestrator | 2026-04-18 03:45:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:19.331491 | orchestrator | 2026-04-18 03:45:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:19.331573 | orchestrator | 2026-04-18 03:45:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:22.378269 | orchestrator | 2026-04-18 
03:45:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:22.379341 | orchestrator | 2026-04-18 03:45:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:22.379398 | orchestrator | 2026-04-18 03:45:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:25.424223 | orchestrator | 2026-04-18 03:45:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:25.426466 | orchestrator | 2026-04-18 03:45:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:25.426559 | orchestrator | 2026-04-18 03:45:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:28.464644 | orchestrator | 2026-04-18 03:45:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:28.466227 | orchestrator | 2026-04-18 03:45:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:28.466305 | orchestrator | 2026-04-18 03:45:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:31.509900 | orchestrator | 2026-04-18 03:45:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:31.511820 | orchestrator | 2026-04-18 03:45:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:31.511882 | orchestrator | 2026-04-18 03:45:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:34.561002 | orchestrator | 2026-04-18 03:45:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:34.561728 | orchestrator | 2026-04-18 03:45:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:34.561767 | orchestrator | 2026-04-18 03:45:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:37.606782 | orchestrator | 2026-04-18 03:45:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:45:37.609250 | orchestrator | 2026-04-18 03:45:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:37.609371 | orchestrator | 2026-04-18 03:45:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:40.658276 | orchestrator | 2026-04-18 03:45:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:40.659702 | orchestrator | 2026-04-18 03:45:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:40.659758 | orchestrator | 2026-04-18 03:45:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:43.700353 | orchestrator | 2026-04-18 03:45:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:43.701947 | orchestrator | 2026-04-18 03:45:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:43.701998 | orchestrator | 2026-04-18 03:45:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:46.748864 | orchestrator | 2026-04-18 03:45:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:46.750347 | orchestrator | 2026-04-18 03:45:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:46.750390 | orchestrator | 2026-04-18 03:45:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:49.796519 | orchestrator | 2026-04-18 03:45:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:49.798384 | orchestrator | 2026-04-18 03:45:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:49.798453 | orchestrator | 2026-04-18 03:45:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:52.841017 | orchestrator | 2026-04-18 03:45:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:52.842833 | orchestrator | 2026-04-18 03:45:52 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:52.842901 | orchestrator | 2026-04-18 03:45:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:55.889318 | orchestrator | 2026-04-18 03:45:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:55.889837 | orchestrator | 2026-04-18 03:45:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:55.889932 | orchestrator | 2026-04-18 03:45:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:45:58.935308 | orchestrator | 2026-04-18 03:45:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:45:58.939732 | orchestrator | 2026-04-18 03:45:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:45:58.939846 | orchestrator | 2026-04-18 03:45:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:01.990149 | orchestrator | 2026-04-18 03:46:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:01.993380 | orchestrator | 2026-04-18 03:46:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:01.993443 | orchestrator | 2026-04-18 03:46:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:05.037370 | orchestrator | 2026-04-18 03:46:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:05.040228 | orchestrator | 2026-04-18 03:46:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:05.040312 | orchestrator | 2026-04-18 03:46:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:08.093330 | orchestrator | 2026-04-18 03:46:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:08.095998 | orchestrator | 2026-04-18 03:46:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:46:08.096077 | orchestrator | 2026-04-18 03:46:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:11.146264 | orchestrator | 2026-04-18 03:46:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:11.148316 | orchestrator | 2026-04-18 03:46:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:11.148358 | orchestrator | 2026-04-18 03:46:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:14.197700 | orchestrator | 2026-04-18 03:46:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:14.200048 | orchestrator | 2026-04-18 03:46:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:14.200103 | orchestrator | 2026-04-18 03:46:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:17.248946 | orchestrator | 2026-04-18 03:46:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:17.250690 | orchestrator | 2026-04-18 03:46:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:17.250769 | orchestrator | 2026-04-18 03:46:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:20.301090 | orchestrator | 2026-04-18 03:46:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:20.302260 | orchestrator | 2026-04-18 03:46:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:20.302341 | orchestrator | 2026-04-18 03:46:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:23.354122 | orchestrator | 2026-04-18 03:46:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:23.355441 | orchestrator | 2026-04-18 03:46:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:23.355552 | orchestrator | 2026-04-18 03:46:23 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:46:26.401306 | orchestrator | 2026-04-18 03:46:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:26.403329 | orchestrator | 2026-04-18 03:46:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:26.403373 | orchestrator | 2026-04-18 03:46:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:29.439116 | orchestrator | 2026-04-18 03:46:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:29.440093 | orchestrator | 2026-04-18 03:46:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:29.440135 | orchestrator | 2026-04-18 03:46:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:32.489702 | orchestrator | 2026-04-18 03:46:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:32.492277 | orchestrator | 2026-04-18 03:46:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:32.492356 | orchestrator | 2026-04-18 03:46:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:35.537503 | orchestrator | 2026-04-18 03:46:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:35.543014 | orchestrator | 2026-04-18 03:46:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:35.543084 | orchestrator | 2026-04-18 03:46:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:38.588993 | orchestrator | 2026-04-18 03:46:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:38.591949 | orchestrator | 2026-04-18 03:46:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:38.592047 | orchestrator | 2026-04-18 03:46:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:41.640399 | orchestrator | 2026-04-18 
03:46:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:41.641954 | orchestrator | 2026-04-18 03:46:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:41.642011 | orchestrator | 2026-04-18 03:46:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:44.692712 | orchestrator | 2026-04-18 03:46:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:44.694336 | orchestrator | 2026-04-18 03:46:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:44.694402 | orchestrator | 2026-04-18 03:46:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:47.739706 | orchestrator | 2026-04-18 03:46:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:47.741594 | orchestrator | 2026-04-18 03:46:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:47.741666 | orchestrator | 2026-04-18 03:46:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:50.788426 | orchestrator | 2026-04-18 03:46:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:50.790084 | orchestrator | 2026-04-18 03:46:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:50.790216 | orchestrator | 2026-04-18 03:46:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:53.839036 | orchestrator | 2026-04-18 03:46:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:46:53.839651 | orchestrator | 2026-04-18 03:46:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:46:53.839682 | orchestrator | 2026-04-18 03:46:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:46:56.885531 | orchestrator | 2026-04-18 03:46:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED
2026-04-18 03:46:56.887815 | orchestrator | 2026-04-18 03:46:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 03:46:56.887914 | orchestrator | 2026-04-18 03:46:56 | INFO  | Wait 1 second(s) until the next check
2026-04-18 03:46:59.929128 | orchestrator | 2026-04-18 03:46:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 03:52:29.385403 | orchestrator | 2026-04-18 03:52:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 03:52:29.387379 | orchestrator | 2026-04-18 03:52:29 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:29.387463 | orchestrator | 2026-04-18 03:52:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:32.440606 | orchestrator | 2026-04-18 03:52:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:32.441365 | orchestrator | 2026-04-18 03:52:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:32.441427 | orchestrator | 2026-04-18 03:52:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:35.480181 | orchestrator | 2026-04-18 03:52:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:35.481635 | orchestrator | 2026-04-18 03:52:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:35.481680 | orchestrator | 2026-04-18 03:52:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:38.531971 | orchestrator | 2026-04-18 03:52:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:38.533046 | orchestrator | 2026-04-18 03:52:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:38.533166 | orchestrator | 2026-04-18 03:52:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:41.582977 | orchestrator | 2026-04-18 03:52:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:41.586502 | orchestrator | 2026-04-18 03:52:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:41.586574 | orchestrator | 2026-04-18 03:52:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:44.637571 | orchestrator | 2026-04-18 03:52:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:44.639515 | orchestrator | 2026-04-18 03:52:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:52:44.639562 | orchestrator | 2026-04-18 03:52:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:47.690182 | orchestrator | 2026-04-18 03:52:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:47.692373 | orchestrator | 2026-04-18 03:52:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:47.692435 | orchestrator | 2026-04-18 03:52:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:50.745243 | orchestrator | 2026-04-18 03:52:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:50.747310 | orchestrator | 2026-04-18 03:52:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:50.747381 | orchestrator | 2026-04-18 03:52:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:53.796167 | orchestrator | 2026-04-18 03:52:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:53.797703 | orchestrator | 2026-04-18 03:52:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:53.797771 | orchestrator | 2026-04-18 03:52:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:56.846300 | orchestrator | 2026-04-18 03:52:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:56.848156 | orchestrator | 2026-04-18 03:52:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:56.848206 | orchestrator | 2026-04-18 03:52:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:52:59.895422 | orchestrator | 2026-04-18 03:52:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:52:59.896373 | orchestrator | 2026-04-18 03:52:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:52:59.896420 | orchestrator | 2026-04-18 03:52:59 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:53:02.951837 | orchestrator | 2026-04-18 03:53:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:02.952932 | orchestrator | 2026-04-18 03:53:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:02.953024 | orchestrator | 2026-04-18 03:53:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:05.999369 | orchestrator | 2026-04-18 03:53:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:06.002349 | orchestrator | 2026-04-18 03:53:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:06.002436 | orchestrator | 2026-04-18 03:53:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:09.046657 | orchestrator | 2026-04-18 03:53:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:09.048084 | orchestrator | 2026-04-18 03:53:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:09.048193 | orchestrator | 2026-04-18 03:53:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:12.094309 | orchestrator | 2026-04-18 03:53:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:12.096306 | orchestrator | 2026-04-18 03:53:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:12.096365 | orchestrator | 2026-04-18 03:53:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:15.142504 | orchestrator | 2026-04-18 03:53:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:15.144711 | orchestrator | 2026-04-18 03:53:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:15.144779 | orchestrator | 2026-04-18 03:53:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:18.194516 | orchestrator | 2026-04-18 
03:53:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:18.196722 | orchestrator | 2026-04-18 03:53:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:18.196777 | orchestrator | 2026-04-18 03:53:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:21.245704 | orchestrator | 2026-04-18 03:53:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:21.246765 | orchestrator | 2026-04-18 03:53:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:21.246854 | orchestrator | 2026-04-18 03:53:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:24.296895 | orchestrator | 2026-04-18 03:53:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:24.299837 | orchestrator | 2026-04-18 03:53:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:24.299913 | orchestrator | 2026-04-18 03:53:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:27.345524 | orchestrator | 2026-04-18 03:53:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:27.346841 | orchestrator | 2026-04-18 03:53:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:27.346944 | orchestrator | 2026-04-18 03:53:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:30.393501 | orchestrator | 2026-04-18 03:53:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:30.395091 | orchestrator | 2026-04-18 03:53:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:30.395589 | orchestrator | 2026-04-18 03:53:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:33.440351 | orchestrator | 2026-04-18 03:53:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:53:33.441287 | orchestrator | 2026-04-18 03:53:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:33.441345 | orchestrator | 2026-04-18 03:53:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:36.492267 | orchestrator | 2026-04-18 03:53:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:36.495320 | orchestrator | 2026-04-18 03:53:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:36.495399 | orchestrator | 2026-04-18 03:53:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:39.541382 | orchestrator | 2026-04-18 03:53:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:39.542404 | orchestrator | 2026-04-18 03:53:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:39.542495 | orchestrator | 2026-04-18 03:53:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:42.588192 | orchestrator | 2026-04-18 03:53:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:42.589254 | orchestrator | 2026-04-18 03:53:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:42.589282 | orchestrator | 2026-04-18 03:53:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:45.636083 | orchestrator | 2026-04-18 03:53:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:45.638731 | orchestrator | 2026-04-18 03:53:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:45.638830 | orchestrator | 2026-04-18 03:53:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:48.684275 | orchestrator | 2026-04-18 03:53:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:48.686132 | orchestrator | 2026-04-18 03:53:48 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:48.686216 | orchestrator | 2026-04-18 03:53:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:51.738142 | orchestrator | 2026-04-18 03:53:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:51.738673 | orchestrator | 2026-04-18 03:53:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:51.738708 | orchestrator | 2026-04-18 03:53:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:54.782634 | orchestrator | 2026-04-18 03:53:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:54.783507 | orchestrator | 2026-04-18 03:53:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:54.783551 | orchestrator | 2026-04-18 03:53:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:53:57.827427 | orchestrator | 2026-04-18 03:53:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:53:57.830710 | orchestrator | 2026-04-18 03:53:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:53:57.830793 | orchestrator | 2026-04-18 03:53:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:00.879327 | orchestrator | 2026-04-18 03:54:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:00.881762 | orchestrator | 2026-04-18 03:54:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:00.881859 | orchestrator | 2026-04-18 03:54:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:03.925484 | orchestrator | 2026-04-18 03:54:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:03.926825 | orchestrator | 2026-04-18 03:54:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:54:03.926909 | orchestrator | 2026-04-18 03:54:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:06.972480 | orchestrator | 2026-04-18 03:54:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:06.974145 | orchestrator | 2026-04-18 03:54:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:06.974222 | orchestrator | 2026-04-18 03:54:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:10.030335 | orchestrator | 2026-04-18 03:54:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:10.031974 | orchestrator | 2026-04-18 03:54:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:10.032046 | orchestrator | 2026-04-18 03:54:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:13.075650 | orchestrator | 2026-04-18 03:54:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:13.077099 | orchestrator | 2026-04-18 03:54:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:13.077144 | orchestrator | 2026-04-18 03:54:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:16.119926 | orchestrator | 2026-04-18 03:54:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:16.121231 | orchestrator | 2026-04-18 03:54:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:16.121309 | orchestrator | 2026-04-18 03:54:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:19.164524 | orchestrator | 2026-04-18 03:54:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:19.167081 | orchestrator | 2026-04-18 03:54:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:19.167149 | orchestrator | 2026-04-18 03:54:19 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:54:22.216215 | orchestrator | 2026-04-18 03:54:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:22.218562 | orchestrator | 2026-04-18 03:54:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:22.218724 | orchestrator | 2026-04-18 03:54:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:25.258335 | orchestrator | 2026-04-18 03:54:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:25.260795 | orchestrator | 2026-04-18 03:54:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:25.260850 | orchestrator | 2026-04-18 03:54:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:28.308314 | orchestrator | 2026-04-18 03:54:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:28.311293 | orchestrator | 2026-04-18 03:54:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:28.311409 | orchestrator | 2026-04-18 03:54:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:31.353599 | orchestrator | 2026-04-18 03:54:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:31.355261 | orchestrator | 2026-04-18 03:54:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:31.355333 | orchestrator | 2026-04-18 03:54:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:34.404911 | orchestrator | 2026-04-18 03:54:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:34.406646 | orchestrator | 2026-04-18 03:54:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:34.406703 | orchestrator | 2026-04-18 03:54:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:37.451217 | orchestrator | 2026-04-18 
03:54:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:37.453784 | orchestrator | 2026-04-18 03:54:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:37.453884 | orchestrator | 2026-04-18 03:54:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:40.493468 | orchestrator | 2026-04-18 03:54:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:40.494473 | orchestrator | 2026-04-18 03:54:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:40.494566 | orchestrator | 2026-04-18 03:54:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:43.541219 | orchestrator | 2026-04-18 03:54:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:43.542956 | orchestrator | 2026-04-18 03:54:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:43.543018 | orchestrator | 2026-04-18 03:54:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:46.589725 | orchestrator | 2026-04-18 03:54:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:46.591145 | orchestrator | 2026-04-18 03:54:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:46.591503 | orchestrator | 2026-04-18 03:54:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:49.644568 | orchestrator | 2026-04-18 03:54:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:49.645854 | orchestrator | 2026-04-18 03:54:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:49.645898 | orchestrator | 2026-04-18 03:54:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:52.689970 | orchestrator | 2026-04-18 03:54:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:54:52.691377 | orchestrator | 2026-04-18 03:54:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:52.691419 | orchestrator | 2026-04-18 03:54:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:55.743564 | orchestrator | 2026-04-18 03:54:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:55.744557 | orchestrator | 2026-04-18 03:54:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:55.744737 | orchestrator | 2026-04-18 03:54:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:54:58.785214 | orchestrator | 2026-04-18 03:54:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:54:58.786725 | orchestrator | 2026-04-18 03:54:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:54:58.786776 | orchestrator | 2026-04-18 03:54:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:55:01.841578 | orchestrator | 2026-04-18 03:55:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:55:01.843579 | orchestrator | 2026-04-18 03:55:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:55:01.843633 | orchestrator | 2026-04-18 03:55:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:55:04.896461 | orchestrator | 2026-04-18 03:55:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:55:04.897468 | orchestrator | 2026-04-18 03:55:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:55:04.897566 | orchestrator | 2026-04-18 03:55:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:55:07.948648 | orchestrator | 2026-04-18 03:55:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:55:07.950080 | orchestrator | 2026-04-18 03:55:07 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:55:07.950192 | orchestrator | 2026-04-18 03:55:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:55:10.995116 | orchestrator | 2026-04-18 03:55:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:55:10.998104 | orchestrator | 2026-04-18 03:55:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:55:10.998183 | orchestrator | 2026-04-18 03:55:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:55:14.049468 | orchestrator | 2026-04-18 03:55:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:55:14.050880 | orchestrator | 2026-04-18 03:55:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:55:14.052195 | orchestrator | 2026-04-18 03:55:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:55:17.098428 | orchestrator | 2026-04-18 03:55:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:55:17.100398 | orchestrator | 2026-04-18 03:55:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:55:17.100477 | orchestrator | 2026-04-18 03:55:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:55:20.151655 | orchestrator | 2026-04-18 03:55:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:20.250161 | orchestrator | 2026-04-18 03:57:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:20.250242 | orchestrator | 2026-04-18 03:57:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:23.290179 | orchestrator | 2026-04-18 03:57:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:23.291665 | orchestrator | 2026-04-18 03:57:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
03:57:23.291725 | orchestrator | 2026-04-18 03:57:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:26.333240 | orchestrator | 2026-04-18 03:57:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:26.334484 | orchestrator | 2026-04-18 03:57:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:26.334515 | orchestrator | 2026-04-18 03:57:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:29.380656 | orchestrator | 2026-04-18 03:57:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:29.383711 | orchestrator | 2026-04-18 03:57:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:29.383770 | orchestrator | 2026-04-18 03:57:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:32.430673 | orchestrator | 2026-04-18 03:57:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:32.432199 | orchestrator | 2026-04-18 03:57:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:32.432247 | orchestrator | 2026-04-18 03:57:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:35.471194 | orchestrator | 2026-04-18 03:57:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:35.472806 | orchestrator | 2026-04-18 03:57:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:35.472884 | orchestrator | 2026-04-18 03:57:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:38.520240 | orchestrator | 2026-04-18 03:57:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:38.522737 | orchestrator | 2026-04-18 03:57:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:38.522822 | orchestrator | 2026-04-18 03:57:38 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 03:57:41.575078 | orchestrator | 2026-04-18 03:57:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:41.579963 | orchestrator | 2026-04-18 03:57:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:41.580051 | orchestrator | 2026-04-18 03:57:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:44.620349 | orchestrator | 2026-04-18 03:57:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:44.622066 | orchestrator | 2026-04-18 03:57:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:44.622130 | orchestrator | 2026-04-18 03:57:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:47.664481 | orchestrator | 2026-04-18 03:57:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:47.665755 | orchestrator | 2026-04-18 03:57:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:47.665886 | orchestrator | 2026-04-18 03:57:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:50.713922 | orchestrator | 2026-04-18 03:57:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:50.714813 | orchestrator | 2026-04-18 03:57:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:50.714951 | orchestrator | 2026-04-18 03:57:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:53.765057 | orchestrator | 2026-04-18 03:57:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:53.767787 | orchestrator | 2026-04-18 03:57:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:53.767874 | orchestrator | 2026-04-18 03:57:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:56.801968 | orchestrator | 2026-04-18 
03:57:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:56.803569 | orchestrator | 2026-04-18 03:57:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:56.803698 | orchestrator | 2026-04-18 03:57:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:57:59.849136 | orchestrator | 2026-04-18 03:57:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:57:59.852213 | orchestrator | 2026-04-18 03:57:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:57:59.852301 | orchestrator | 2026-04-18 03:57:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:58:02.891927 | orchestrator | 2026-04-18 03:58:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:58:02.893613 | orchestrator | 2026-04-18 03:58:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:58:02.893667 | orchestrator | 2026-04-18 03:58:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:58:05.944602 | orchestrator | 2026-04-18 03:58:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:58:05.946246 | orchestrator | 2026-04-18 03:58:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:58:05.946307 | orchestrator | 2026-04-18 03:58:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:58:08.992789 | orchestrator | 2026-04-18 03:58:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 03:58:08.993611 | orchestrator | 2026-04-18 03:58:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 03:58:08.993640 | orchestrator | 2026-04-18 03:58:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 03:58:12.039572 | orchestrator | 2026-04-18 03:58:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 03:58:12.041125 | orchestrator | 2026-04-18 03:58:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 03:58:12.041184 | orchestrator | 2026-04-18 03:58:12 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: checks repeat every ~3 seconds from 03:58:15 through 04:03:25; tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 remain in state STARTED throughout ...]
2026-04-18 04:03:29.040001 | orchestrator | 2026-04-18 04:03:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state
STARTED 2026-04-18 04:03:29.041943 | orchestrator | 2026-04-18 04:03:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:29.043006 | orchestrator | 2026-04-18 04:03:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:32.101410 | orchestrator | 2026-04-18 04:03:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:32.103116 | orchestrator | 2026-04-18 04:03:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:32.103242 | orchestrator | 2026-04-18 04:03:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:35.154267 | orchestrator | 2026-04-18 04:03:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:35.156409 | orchestrator | 2026-04-18 04:03:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:35.156476 | orchestrator | 2026-04-18 04:03:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:38.210237 | orchestrator | 2026-04-18 04:03:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:38.214010 | orchestrator | 2026-04-18 04:03:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:38.214125 | orchestrator | 2026-04-18 04:03:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:41.268028 | orchestrator | 2026-04-18 04:03:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:41.270245 | orchestrator | 2026-04-18 04:03:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:41.270327 | orchestrator | 2026-04-18 04:03:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:44.322977 | orchestrator | 2026-04-18 04:03:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:44.323215 | orchestrator | 2026-04-18 04:03:44 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:44.323237 | orchestrator | 2026-04-18 04:03:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:47.372718 | orchestrator | 2026-04-18 04:03:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:47.374728 | orchestrator | 2026-04-18 04:03:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:47.374785 | orchestrator | 2026-04-18 04:03:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:50.416176 | orchestrator | 2026-04-18 04:03:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:50.416388 | orchestrator | 2026-04-18 04:03:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:50.416407 | orchestrator | 2026-04-18 04:03:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:53.464800 | orchestrator | 2026-04-18 04:03:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:53.467297 | orchestrator | 2026-04-18 04:03:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:53.467407 | orchestrator | 2026-04-18 04:03:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:56.511712 | orchestrator | 2026-04-18 04:03:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:56.513709 | orchestrator | 2026-04-18 04:03:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:03:56.513789 | orchestrator | 2026-04-18 04:03:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:03:59.557293 | orchestrator | 2026-04-18 04:03:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:03:59.558222 | orchestrator | 2026-04-18 04:03:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
04:03:59.558287 | orchestrator | 2026-04-18 04:03:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:02.602372 | orchestrator | 2026-04-18 04:04:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:02.603718 | orchestrator | 2026-04-18 04:04:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:02.603892 | orchestrator | 2026-04-18 04:04:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:05.645586 | orchestrator | 2026-04-18 04:04:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:05.645736 | orchestrator | 2026-04-18 04:04:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:05.645752 | orchestrator | 2026-04-18 04:04:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:08.691024 | orchestrator | 2026-04-18 04:04:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:08.693595 | orchestrator | 2026-04-18 04:04:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:08.693724 | orchestrator | 2026-04-18 04:04:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:11.744736 | orchestrator | 2026-04-18 04:04:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:11.746638 | orchestrator | 2026-04-18 04:04:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:11.746752 | orchestrator | 2026-04-18 04:04:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:14.792596 | orchestrator | 2026-04-18 04:04:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:14.794880 | orchestrator | 2026-04-18 04:04:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:14.795211 | orchestrator | 2026-04-18 04:04:14 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 04:04:17.839708 | orchestrator | 2026-04-18 04:04:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:17.842327 | orchestrator | 2026-04-18 04:04:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:17.842640 | orchestrator | 2026-04-18 04:04:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:20.893271 | orchestrator | 2026-04-18 04:04:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:20.894744 | orchestrator | 2026-04-18 04:04:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:20.894770 | orchestrator | 2026-04-18 04:04:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:23.944729 | orchestrator | 2026-04-18 04:04:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:23.946764 | orchestrator | 2026-04-18 04:04:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:23.946844 | orchestrator | 2026-04-18 04:04:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:26.992982 | orchestrator | 2026-04-18 04:04:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:26.993886 | orchestrator | 2026-04-18 04:04:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:26.993943 | orchestrator | 2026-04-18 04:04:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:30.041744 | orchestrator | 2026-04-18 04:04:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:30.043213 | orchestrator | 2026-04-18 04:04:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:30.043296 | orchestrator | 2026-04-18 04:04:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:33.086459 | orchestrator | 2026-04-18 
04:04:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:33.087633 | orchestrator | 2026-04-18 04:04:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:33.087677 | orchestrator | 2026-04-18 04:04:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:36.136321 | orchestrator | 2026-04-18 04:04:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:36.137527 | orchestrator | 2026-04-18 04:04:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:36.137581 | orchestrator | 2026-04-18 04:04:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:39.186648 | orchestrator | 2026-04-18 04:04:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:39.188099 | orchestrator | 2026-04-18 04:04:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:39.188282 | orchestrator | 2026-04-18 04:04:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:42.241289 | orchestrator | 2026-04-18 04:04:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:42.243065 | orchestrator | 2026-04-18 04:04:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:42.243130 | orchestrator | 2026-04-18 04:04:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:45.292822 | orchestrator | 2026-04-18 04:04:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:45.296158 | orchestrator | 2026-04-18 04:04:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:45.296234 | orchestrator | 2026-04-18 04:04:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:48.348173 | orchestrator | 2026-04-18 04:04:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 04:04:48.350129 | orchestrator | 2026-04-18 04:04:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:48.350224 | orchestrator | 2026-04-18 04:04:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:51.393861 | orchestrator | 2026-04-18 04:04:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:51.395149 | orchestrator | 2026-04-18 04:04:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:51.395229 | orchestrator | 2026-04-18 04:04:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:54.448583 | orchestrator | 2026-04-18 04:04:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:54.451282 | orchestrator | 2026-04-18 04:04:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:54.451422 | orchestrator | 2026-04-18 04:04:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:04:57.500215 | orchestrator | 2026-04-18 04:04:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:04:57.501755 | orchestrator | 2026-04-18 04:04:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:04:57.501825 | orchestrator | 2026-04-18 04:04:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:00.543087 | orchestrator | 2026-04-18 04:05:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:00.543769 | orchestrator | 2026-04-18 04:05:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:00.543903 | orchestrator | 2026-04-18 04:05:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:03.598791 | orchestrator | 2026-04-18 04:05:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:03.600412 | orchestrator | 2026-04-18 04:05:03 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:03.600533 | orchestrator | 2026-04-18 04:05:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:06.652876 | orchestrator | 2026-04-18 04:05:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:06.654697 | orchestrator | 2026-04-18 04:05:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:06.654762 | orchestrator | 2026-04-18 04:05:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:09.700131 | orchestrator | 2026-04-18 04:05:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:09.701764 | orchestrator | 2026-04-18 04:05:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:09.701870 | orchestrator | 2026-04-18 04:05:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:12.750665 | orchestrator | 2026-04-18 04:05:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:12.752697 | orchestrator | 2026-04-18 04:05:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:12.752765 | orchestrator | 2026-04-18 04:05:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:15.797571 | orchestrator | 2026-04-18 04:05:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:15.798840 | orchestrator | 2026-04-18 04:05:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:15.798990 | orchestrator | 2026-04-18 04:05:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:18.847094 | orchestrator | 2026-04-18 04:05:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:18.848634 | orchestrator | 2026-04-18 04:05:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
04:05:18.848692 | orchestrator | 2026-04-18 04:05:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:21.900566 | orchestrator | 2026-04-18 04:05:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:21.902478 | orchestrator | 2026-04-18 04:05:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:21.903141 | orchestrator | 2026-04-18 04:05:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:24.949758 | orchestrator | 2026-04-18 04:05:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:24.952499 | orchestrator | 2026-04-18 04:05:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:24.952548 | orchestrator | 2026-04-18 04:05:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:28.000290 | orchestrator | 2026-04-18 04:05:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:28.003088 | orchestrator | 2026-04-18 04:05:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:28.003179 | orchestrator | 2026-04-18 04:05:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:31.054742 | orchestrator | 2026-04-18 04:05:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:31.056781 | orchestrator | 2026-04-18 04:05:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:31.056823 | orchestrator | 2026-04-18 04:05:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:34.101172 | orchestrator | 2026-04-18 04:05:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:34.104265 | orchestrator | 2026-04-18 04:05:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:34.104370 | orchestrator | 2026-04-18 04:05:34 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 04:05:37.152728 | orchestrator | 2026-04-18 04:05:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:37.154879 | orchestrator | 2026-04-18 04:05:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:37.154959 | orchestrator | 2026-04-18 04:05:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:40.200863 | orchestrator | 2026-04-18 04:05:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:40.201989 | orchestrator | 2026-04-18 04:05:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:40.202103 | orchestrator | 2026-04-18 04:05:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:43.248666 | orchestrator | 2026-04-18 04:05:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:43.250598 | orchestrator | 2026-04-18 04:05:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:43.250667 | orchestrator | 2026-04-18 04:05:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:46.292494 | orchestrator | 2026-04-18 04:05:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:46.294371 | orchestrator | 2026-04-18 04:05:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:46.294557 | orchestrator | 2026-04-18 04:05:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:49.337254 | orchestrator | 2026-04-18 04:05:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:49.339288 | orchestrator | 2026-04-18 04:05:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:49.339364 | orchestrator | 2026-04-18 04:05:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:52.382636 | orchestrator | 2026-04-18 
04:05:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:52.384139 | orchestrator | 2026-04-18 04:05:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:52.384199 | orchestrator | 2026-04-18 04:05:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:55.436133 | orchestrator | 2026-04-18 04:05:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:55.438197 | orchestrator | 2026-04-18 04:05:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:55.438266 | orchestrator | 2026-04-18 04:05:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:05:58.492356 | orchestrator | 2026-04-18 04:05:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:05:58.495311 | orchestrator | 2026-04-18 04:05:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:05:58.495518 | orchestrator | 2026-04-18 04:05:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:01.541774 | orchestrator | 2026-04-18 04:06:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:01.542584 | orchestrator | 2026-04-18 04:06:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:01.542626 | orchestrator | 2026-04-18 04:06:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:04.593749 | orchestrator | 2026-04-18 04:06:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:04.597349 | orchestrator | 2026-04-18 04:06:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:04.597452 | orchestrator | 2026-04-18 04:06:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:07.646141 | orchestrator | 2026-04-18 04:06:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 04:06:07.648531 | orchestrator | 2026-04-18 04:06:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:07.648595 | orchestrator | 2026-04-18 04:06:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:10.696749 | orchestrator | 2026-04-18 04:06:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:10.698473 | orchestrator | 2026-04-18 04:06:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:10.698526 | orchestrator | 2026-04-18 04:06:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:13.749604 | orchestrator | 2026-04-18 04:06:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:13.750281 | orchestrator | 2026-04-18 04:06:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:13.750316 | orchestrator | 2026-04-18 04:06:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:16.797492 | orchestrator | 2026-04-18 04:06:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:16.800798 | orchestrator | 2026-04-18 04:06:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:16.800866 | orchestrator | 2026-04-18 04:06:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:19.842984 | orchestrator | 2026-04-18 04:06:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:19.844717 | orchestrator | 2026-04-18 04:06:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:19.845093 | orchestrator | 2026-04-18 04:06:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:22.891250 | orchestrator | 2026-04-18 04:06:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:22.894241 | orchestrator | 2026-04-18 04:06:22 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:22.894309 | orchestrator | 2026-04-18 04:06:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:25.943962 | orchestrator | 2026-04-18 04:06:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:25.945235 | orchestrator | 2026-04-18 04:06:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:25.945291 | orchestrator | 2026-04-18 04:06:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:28.988655 | orchestrator | 2026-04-18 04:06:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:28.989205 | orchestrator | 2026-04-18 04:06:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:28.989267 | orchestrator | 2026-04-18 04:06:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:32.037277 | orchestrator | 2026-04-18 04:06:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:32.039495 | orchestrator | 2026-04-18 04:06:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:32.039555 | orchestrator | 2026-04-18 04:06:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:35.089762 | orchestrator | 2026-04-18 04:06:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:35.091079 | orchestrator | 2026-04-18 04:06:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:35.091151 | orchestrator | 2026-04-18 04:06:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:38.131672 | orchestrator | 2026-04-18 04:06:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:38.131828 | orchestrator | 2026-04-18 04:06:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
04:06:38.131842 | orchestrator | 2026-04-18 04:06:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:41.179713 | orchestrator | 2026-04-18 04:06:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:41.180443 | orchestrator | 2026-04-18 04:06:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:41.180516 | orchestrator | 2026-04-18 04:06:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:44.224515 | orchestrator | 2026-04-18 04:06:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:44.226860 | orchestrator | 2026-04-18 04:06:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:44.226922 | orchestrator | 2026-04-18 04:06:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:47.270883 | orchestrator | 2026-04-18 04:06:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:47.272786 | orchestrator | 2026-04-18 04:06:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:47.272807 | orchestrator | 2026-04-18 04:06:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:50.318333 | orchestrator | 2026-04-18 04:06:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:50.321958 | orchestrator | 2026-04-18 04:06:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:50.322969 | orchestrator | 2026-04-18 04:06:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:06:53.371530 | orchestrator | 2026-04-18 04:06:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:53.373395 | orchestrator | 2026-04-18 04:06:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:53.373436 | orchestrator | 2026-04-18 04:06:53 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 04:06:56.424280 | orchestrator | 2026-04-18 04:06:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:06:56.426267 | orchestrator | 2026-04-18 04:06:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:06:56.426326 | orchestrator | 2026-04-18 04:06:56 | INFO  | Wait 1 second(s) until the next check
[... identical status-poll messages repeated every ~3 seconds from 04:06:59 through 04:12:07; tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 remained in state STARTED throughout ...]
2026-04-18 04:12:10.575527 | orchestrator | 2026-04-18 04:12:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:10.578557 | orchestrator | 2026-04-18 04:12:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:10.578617 | orchestrator | 2026-04-18 04:12:10 | INFO  | Wait 1 second(s)
until the next check 2026-04-18 04:12:13.626782 | orchestrator | 2026-04-18 04:12:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:13.627987 | orchestrator | 2026-04-18 04:12:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:13.628256 | orchestrator | 2026-04-18 04:12:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:16.676715 | orchestrator | 2026-04-18 04:12:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:16.679429 | orchestrator | 2026-04-18 04:12:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:16.679530 | orchestrator | 2026-04-18 04:12:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:19.727741 | orchestrator | 2026-04-18 04:12:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:19.729446 | orchestrator | 2026-04-18 04:12:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:19.729493 | orchestrator | 2026-04-18 04:12:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:22.783546 | orchestrator | 2026-04-18 04:12:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:22.784463 | orchestrator | 2026-04-18 04:12:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:22.784502 | orchestrator | 2026-04-18 04:12:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:25.837857 | orchestrator | 2026-04-18 04:12:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:25.839266 | orchestrator | 2026-04-18 04:12:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:25.839376 | orchestrator | 2026-04-18 04:12:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:28.886133 | orchestrator | 2026-04-18 
04:12:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:28.888768 | orchestrator | 2026-04-18 04:12:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:28.888961 | orchestrator | 2026-04-18 04:12:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:31.933522 | orchestrator | 2026-04-18 04:12:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:31.936269 | orchestrator | 2026-04-18 04:12:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:31.936346 | orchestrator | 2026-04-18 04:12:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:34.986375 | orchestrator | 2026-04-18 04:12:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:34.990187 | orchestrator | 2026-04-18 04:12:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:34.990275 | orchestrator | 2026-04-18 04:12:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:38.036970 | orchestrator | 2026-04-18 04:12:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:38.038887 | orchestrator | 2026-04-18 04:12:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:38.038963 | orchestrator | 2026-04-18 04:12:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:41.081337 | orchestrator | 2026-04-18 04:12:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:41.083040 | orchestrator | 2026-04-18 04:12:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:41.083108 | orchestrator | 2026-04-18 04:12:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:44.132561 | orchestrator | 2026-04-18 04:12:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 04:12:44.135194 | orchestrator | 2026-04-18 04:12:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:44.135284 | orchestrator | 2026-04-18 04:12:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:47.185305 | orchestrator | 2026-04-18 04:12:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:47.186827 | orchestrator | 2026-04-18 04:12:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:47.186896 | orchestrator | 2026-04-18 04:12:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:50.235516 | orchestrator | 2026-04-18 04:12:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:50.237929 | orchestrator | 2026-04-18 04:12:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:50.238069 | orchestrator | 2026-04-18 04:12:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:53.286728 | orchestrator | 2026-04-18 04:12:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:53.288999 | orchestrator | 2026-04-18 04:12:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:53.289071 | orchestrator | 2026-04-18 04:12:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:56.335976 | orchestrator | 2026-04-18 04:12:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:56.337018 | orchestrator | 2026-04-18 04:12:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:56.337067 | orchestrator | 2026-04-18 04:12:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:12:59.387530 | orchestrator | 2026-04-18 04:12:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:12:59.389538 | orchestrator | 2026-04-18 04:12:59 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:12:59.389576 | orchestrator | 2026-04-18 04:12:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:02.437449 | orchestrator | 2026-04-18 04:13:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:02.438566 | orchestrator | 2026-04-18 04:13:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:02.438664 | orchestrator | 2026-04-18 04:13:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:05.481021 | orchestrator | 2026-04-18 04:13:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:05.482215 | orchestrator | 2026-04-18 04:13:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:05.482261 | orchestrator | 2026-04-18 04:13:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:08.529367 | orchestrator | 2026-04-18 04:13:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:08.532409 | orchestrator | 2026-04-18 04:13:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:08.532479 | orchestrator | 2026-04-18 04:13:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:11.583727 | orchestrator | 2026-04-18 04:13:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:11.584874 | orchestrator | 2026-04-18 04:13:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:11.584902 | orchestrator | 2026-04-18 04:13:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:14.637922 | orchestrator | 2026-04-18 04:13:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:14.639191 | orchestrator | 2026-04-18 04:13:14 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
04:13:14.639276 | orchestrator | 2026-04-18 04:13:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:17.689636 | orchestrator | 2026-04-18 04:13:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:17.692035 | orchestrator | 2026-04-18 04:13:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:17.692071 | orchestrator | 2026-04-18 04:13:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:20.744753 | orchestrator | 2026-04-18 04:13:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:20.746889 | orchestrator | 2026-04-18 04:13:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:20.747091 | orchestrator | 2026-04-18 04:13:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:23.790795 | orchestrator | 2026-04-18 04:13:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:23.792682 | orchestrator | 2026-04-18 04:13:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:23.792779 | orchestrator | 2026-04-18 04:13:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:26.837966 | orchestrator | 2026-04-18 04:13:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:26.839648 | orchestrator | 2026-04-18 04:13:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:26.839690 | orchestrator | 2026-04-18 04:13:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:29.886954 | orchestrator | 2026-04-18 04:13:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:29.890192 | orchestrator | 2026-04-18 04:13:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:29.890276 | orchestrator | 2026-04-18 04:13:29 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 04:13:32.937180 | orchestrator | 2026-04-18 04:13:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:32.939185 | orchestrator | 2026-04-18 04:13:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:32.939248 | orchestrator | 2026-04-18 04:13:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:35.990689 | orchestrator | 2026-04-18 04:13:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:35.992777 | orchestrator | 2026-04-18 04:13:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:35.992916 | orchestrator | 2026-04-18 04:13:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:39.044454 | orchestrator | 2026-04-18 04:13:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:39.045241 | orchestrator | 2026-04-18 04:13:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:39.045286 | orchestrator | 2026-04-18 04:13:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:42.098268 | orchestrator | 2026-04-18 04:13:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:42.099878 | orchestrator | 2026-04-18 04:13:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:42.100084 | orchestrator | 2026-04-18 04:13:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:45.149052 | orchestrator | 2026-04-18 04:13:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:45.150862 | orchestrator | 2026-04-18 04:13:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:45.150908 | orchestrator | 2026-04-18 04:13:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:48.191837 | orchestrator | 2026-04-18 
04:13:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:48.192651 | orchestrator | 2026-04-18 04:13:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:48.192720 | orchestrator | 2026-04-18 04:13:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:51.241263 | orchestrator | 2026-04-18 04:13:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:51.244137 | orchestrator | 2026-04-18 04:13:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:51.244206 | orchestrator | 2026-04-18 04:13:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:54.288633 | orchestrator | 2026-04-18 04:13:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:54.290576 | orchestrator | 2026-04-18 04:13:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:54.290745 | orchestrator | 2026-04-18 04:13:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:13:57.335373 | orchestrator | 2026-04-18 04:13:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:13:57.337237 | orchestrator | 2026-04-18 04:13:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:13:57.337299 | orchestrator | 2026-04-18 04:13:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:00.386354 | orchestrator | 2026-04-18 04:14:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:00.387237 | orchestrator | 2026-04-18 04:14:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:00.387272 | orchestrator | 2026-04-18 04:14:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:03.431424 | orchestrator | 2026-04-18 04:14:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 04:14:03.433649 | orchestrator | 2026-04-18 04:14:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:03.433719 | orchestrator | 2026-04-18 04:14:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:06.479517 | orchestrator | 2026-04-18 04:14:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:06.481824 | orchestrator | 2026-04-18 04:14:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:06.481876 | orchestrator | 2026-04-18 04:14:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:09.525589 | orchestrator | 2026-04-18 04:14:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:09.527660 | orchestrator | 2026-04-18 04:14:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:09.527727 | orchestrator | 2026-04-18 04:14:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:12.572793 | orchestrator | 2026-04-18 04:14:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:12.576819 | orchestrator | 2026-04-18 04:14:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:12.576873 | orchestrator | 2026-04-18 04:14:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:15.617478 | orchestrator | 2026-04-18 04:14:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:15.618413 | orchestrator | 2026-04-18 04:14:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:15.618479 | orchestrator | 2026-04-18 04:14:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:18.665567 | orchestrator | 2026-04-18 04:14:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:18.667786 | orchestrator | 2026-04-18 04:14:18 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:18.667834 | orchestrator | 2026-04-18 04:14:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:21.712030 | orchestrator | 2026-04-18 04:14:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:21.712538 | orchestrator | 2026-04-18 04:14:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:21.713197 | orchestrator | 2026-04-18 04:14:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:24.757374 | orchestrator | 2026-04-18 04:14:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:24.758678 | orchestrator | 2026-04-18 04:14:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:24.758729 | orchestrator | 2026-04-18 04:14:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:27.808798 | orchestrator | 2026-04-18 04:14:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:27.810318 | orchestrator | 2026-04-18 04:14:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:27.810358 | orchestrator | 2026-04-18 04:14:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:30.863940 | orchestrator | 2026-04-18 04:14:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:30.865604 | orchestrator | 2026-04-18 04:14:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:30.865650 | orchestrator | 2026-04-18 04:14:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:33.914506 | orchestrator | 2026-04-18 04:14:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:33.916589 | orchestrator | 2026-04-18 04:14:33 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
04:14:33.916668 | orchestrator | 2026-04-18 04:14:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:36.964506 | orchestrator | 2026-04-18 04:14:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:36.966243 | orchestrator | 2026-04-18 04:14:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:36.966306 | orchestrator | 2026-04-18 04:14:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:40.021559 | orchestrator | 2026-04-18 04:14:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:40.022641 | orchestrator | 2026-04-18 04:14:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:40.022694 | orchestrator | 2026-04-18 04:14:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:43.069253 | orchestrator | 2026-04-18 04:14:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:43.069419 | orchestrator | 2026-04-18 04:14:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:43.069464 | orchestrator | 2026-04-18 04:14:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:46.116738 | orchestrator | 2026-04-18 04:14:46 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:46.118383 | orchestrator | 2026-04-18 04:14:46 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:46.118430 | orchestrator | 2026-04-18 04:14:46 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:49.167695 | orchestrator | 2026-04-18 04:14:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:49.169775 | orchestrator | 2026-04-18 04:14:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:49.169882 | orchestrator | 2026-04-18 04:14:49 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 04:14:52.222666 | orchestrator | 2026-04-18 04:14:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:52.224554 | orchestrator | 2026-04-18 04:14:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:52.224640 | orchestrator | 2026-04-18 04:14:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:55.269320 | orchestrator | 2026-04-18 04:14:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:55.270358 | orchestrator | 2026-04-18 04:14:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:55.270391 | orchestrator | 2026-04-18 04:14:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:14:58.323243 | orchestrator | 2026-04-18 04:14:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:14:58.324463 | orchestrator | 2026-04-18 04:14:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:14:58.324690 | orchestrator | 2026-04-18 04:14:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:01.376263 | orchestrator | 2026-04-18 04:15:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:01.377765 | orchestrator | 2026-04-18 04:15:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:01.377809 | orchestrator | 2026-04-18 04:15:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:04.419404 | orchestrator | 2026-04-18 04:15:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:04.421240 | orchestrator | 2026-04-18 04:15:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:04.421329 | orchestrator | 2026-04-18 04:15:04 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:07.470625 | orchestrator | 2026-04-18 
04:15:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:07.473753 | orchestrator | 2026-04-18 04:15:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:07.473809 | orchestrator | 2026-04-18 04:15:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:10.515898 | orchestrator | 2026-04-18 04:15:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:10.517541 | orchestrator | 2026-04-18 04:15:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:10.517767 | orchestrator | 2026-04-18 04:15:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:13.565691 | orchestrator | 2026-04-18 04:15:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:13.567705 | orchestrator | 2026-04-18 04:15:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:13.567789 | orchestrator | 2026-04-18 04:15:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:16.621413 | orchestrator | 2026-04-18 04:15:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:16.625257 | orchestrator | 2026-04-18 04:15:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:16.625918 | orchestrator | 2026-04-18 04:15:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:19.675174 | orchestrator | 2026-04-18 04:15:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:19.677997 | orchestrator | 2026-04-18 04:15:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:19.678112 | orchestrator | 2026-04-18 04:15:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:22.725767 | orchestrator | 2026-04-18 04:15:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 04:15:22.729209 | orchestrator | 2026-04-18 04:15:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:22.729281 | orchestrator | 2026-04-18 04:15:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:25.774617 | orchestrator | 2026-04-18 04:15:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:25.776556 | orchestrator | 2026-04-18 04:15:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:25.776624 | orchestrator | 2026-04-18 04:15:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:28.812461 | orchestrator | 2026-04-18 04:15:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:28.812644 | orchestrator | 2026-04-18 04:15:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:28.812662 | orchestrator | 2026-04-18 04:15:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:31.856544 | orchestrator | 2026-04-18 04:15:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:31.857137 | orchestrator | 2026-04-18 04:15:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:31.857173 | orchestrator | 2026-04-18 04:15:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:34.906757 | orchestrator | 2026-04-18 04:15:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:34.909768 | orchestrator | 2026-04-18 04:15:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:34.909843 | orchestrator | 2026-04-18 04:15:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:37.957010 | orchestrator | 2026-04-18 04:15:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:37.958871 | orchestrator | 2026-04-18 04:15:37 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:37.958992 | orchestrator | 2026-04-18 04:15:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:40.999901 | orchestrator | 2026-04-18 04:15:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:41.002383 | orchestrator | 2026-04-18 04:15:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:41.002499 | orchestrator | 2026-04-18 04:15:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:44.051806 | orchestrator | 2026-04-18 04:15:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:44.053559 | orchestrator | 2026-04-18 04:15:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:44.053613 | orchestrator | 2026-04-18 04:15:44 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:47.104897 | orchestrator | 2026-04-18 04:15:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:47.107362 | orchestrator | 2026-04-18 04:15:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:47.107412 | orchestrator | 2026-04-18 04:15:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:50.153496 | orchestrator | 2026-04-18 04:15:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:50.154415 | orchestrator | 2026-04-18 04:15:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:15:50.154464 | orchestrator | 2026-04-18 04:15:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:15:53.209379 | orchestrator | 2026-04-18 04:15:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:15:53.210813 | orchestrator | 2026-04-18 04:15:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
04:15:53.211195 | orchestrator | 2026-04-18 04:15:53 | INFO  | Wait 1 second(s) until the next check
2026-04-18 04:15:56.256117 | orchestrator | 2026-04-18 04:15:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 04:15:56.259256 | orchestrator | 2026-04-18 04:15:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 04:15:56.259368 | orchestrator | 2026-04-18 04:15:56 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 04:15:59 through 04:21:22: tasks 9f5a403d-8430-46cf-aff0-0e7b22c6e70e and 4233de3e-3508-4122-beb3-868538e67502 remain in state STARTED ...]
2026-04-18 04:21:25.674893 | orchestrator | 2026-04-18 04:21:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 04:21:25.676828 | orchestrator | 2026-04-18 04:21:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 04:21:25.676909 | orchestrator | 2026-04-18 04:21:25 | INFO  | Wait 1 second(s)
until the next check 2026-04-18 04:21:28.733565 | orchestrator | 2026-04-18 04:21:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:28.735510 | orchestrator | 2026-04-18 04:21:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:28.735562 | orchestrator | 2026-04-18 04:21:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:31.783719 | orchestrator | 2026-04-18 04:21:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:31.786397 | orchestrator | 2026-04-18 04:21:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:31.786464 | orchestrator | 2026-04-18 04:21:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:34.833421 | orchestrator | 2026-04-18 04:21:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:34.834840 | orchestrator | 2026-04-18 04:21:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:34.834954 | orchestrator | 2026-04-18 04:21:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:37.882928 | orchestrator | 2026-04-18 04:21:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:37.884759 | orchestrator | 2026-04-18 04:21:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:37.884813 | orchestrator | 2026-04-18 04:21:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:40.937726 | orchestrator | 2026-04-18 04:21:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:40.938893 | orchestrator | 2026-04-18 04:21:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:40.938944 | orchestrator | 2026-04-18 04:21:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:43.991329 | orchestrator | 2026-04-18 
04:21:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:43.994407 | orchestrator | 2026-04-18 04:21:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:43.994480 | orchestrator | 2026-04-18 04:21:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:47.052736 | orchestrator | 2026-04-18 04:21:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:47.055657 | orchestrator | 2026-04-18 04:21:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:47.055729 | orchestrator | 2026-04-18 04:21:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:50.106792 | orchestrator | 2026-04-18 04:21:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:50.109498 | orchestrator | 2026-04-18 04:21:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:50.109696 | orchestrator | 2026-04-18 04:21:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:53.159655 | orchestrator | 2026-04-18 04:21:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:53.161938 | orchestrator | 2026-04-18 04:21:53 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:53.161995 | orchestrator | 2026-04-18 04:21:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:56.212411 | orchestrator | 2026-04-18 04:21:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:21:56.214374 | orchestrator | 2026-04-18 04:21:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:56.214436 | orchestrator | 2026-04-18 04:21:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:21:59.266336 | orchestrator | 2026-04-18 04:21:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 04:21:59.268066 | orchestrator | 2026-04-18 04:21:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:21:59.268124 | orchestrator | 2026-04-18 04:21:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:02.309583 | orchestrator | 2026-04-18 04:22:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:02.311677 | orchestrator | 2026-04-18 04:22:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:02.311775 | orchestrator | 2026-04-18 04:22:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:05.355291 | orchestrator | 2026-04-18 04:22:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:05.356334 | orchestrator | 2026-04-18 04:22:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:05.356455 | orchestrator | 2026-04-18 04:22:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:08.401546 | orchestrator | 2026-04-18 04:22:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:08.402876 | orchestrator | 2026-04-18 04:22:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:08.403036 | orchestrator | 2026-04-18 04:22:08 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:11.448734 | orchestrator | 2026-04-18 04:22:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:11.450467 | orchestrator | 2026-04-18 04:22:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:11.450519 | orchestrator | 2026-04-18 04:22:11 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:14.500147 | orchestrator | 2026-04-18 04:22:14 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:14.502153 | orchestrator | 2026-04-18 04:22:14 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:14.502326 | orchestrator | 2026-04-18 04:22:14 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:17.547889 | orchestrator | 2026-04-18 04:22:17 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:17.548820 | orchestrator | 2026-04-18 04:22:17 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:17.548968 | orchestrator | 2026-04-18 04:22:17 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:20.600339 | orchestrator | 2026-04-18 04:22:20 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:20.602422 | orchestrator | 2026-04-18 04:22:20 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:20.602489 | orchestrator | 2026-04-18 04:22:20 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:23.647515 | orchestrator | 2026-04-18 04:22:23 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:23.648889 | orchestrator | 2026-04-18 04:22:23 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:23.648961 | orchestrator | 2026-04-18 04:22:23 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:26.690489 | orchestrator | 2026-04-18 04:22:26 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:26.692904 | orchestrator | 2026-04-18 04:22:26 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:26.693167 | orchestrator | 2026-04-18 04:22:26 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:29.744269 | orchestrator | 2026-04-18 04:22:29 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:29.746116 | orchestrator | 2026-04-18 04:22:29 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
04:22:29.746182 | orchestrator | 2026-04-18 04:22:29 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:32.791756 | orchestrator | 2026-04-18 04:22:32 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:32.794619 | orchestrator | 2026-04-18 04:22:32 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:32.794803 | orchestrator | 2026-04-18 04:22:32 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:35.839624 | orchestrator | 2026-04-18 04:22:35 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:35.840794 | orchestrator | 2026-04-18 04:22:35 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:35.841058 | orchestrator | 2026-04-18 04:22:35 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:38.884819 | orchestrator | 2026-04-18 04:22:38 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:38.887666 | orchestrator | 2026-04-18 04:22:38 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:38.888166 | orchestrator | 2026-04-18 04:22:38 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:41.938795 | orchestrator | 2026-04-18 04:22:41 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:41.940994 | orchestrator | 2026-04-18 04:22:41 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:41.941066 | orchestrator | 2026-04-18 04:22:41 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:44.981857 | orchestrator | 2026-04-18 04:22:44 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:44.982101 | orchestrator | 2026-04-18 04:22:44 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:44.982781 | orchestrator | 2026-04-18 04:22:44 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 04:22:48.032183 | orchestrator | 2026-04-18 04:22:48 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:48.034981 | orchestrator | 2026-04-18 04:22:48 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:48.035059 | orchestrator | 2026-04-18 04:22:48 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:51.085059 | orchestrator | 2026-04-18 04:22:51 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:51.086489 | orchestrator | 2026-04-18 04:22:51 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:51.086540 | orchestrator | 2026-04-18 04:22:51 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:54.135130 | orchestrator | 2026-04-18 04:22:54 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:54.136497 | orchestrator | 2026-04-18 04:22:54 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:54.136550 | orchestrator | 2026-04-18 04:22:54 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:22:57.179696 | orchestrator | 2026-04-18 04:22:57 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:22:57.183687 | orchestrator | 2026-04-18 04:22:57 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:22:57.183748 | orchestrator | 2026-04-18 04:22:57 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:00.230990 | orchestrator | 2026-04-18 04:23:00 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:00.232048 | orchestrator | 2026-04-18 04:23:00 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:00.232139 | orchestrator | 2026-04-18 04:23:00 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:03.287702 | orchestrator | 2026-04-18 
04:23:03 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:03.288778 | orchestrator | 2026-04-18 04:23:03 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:03.288816 | orchestrator | 2026-04-18 04:23:03 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:06.339445 | orchestrator | 2026-04-18 04:23:06 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:06.342458 | orchestrator | 2026-04-18 04:23:06 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:06.342526 | orchestrator | 2026-04-18 04:23:06 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:09.392934 | orchestrator | 2026-04-18 04:23:09 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:09.394501 | orchestrator | 2026-04-18 04:23:09 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:09.395007 | orchestrator | 2026-04-18 04:23:09 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:12.447813 | orchestrator | 2026-04-18 04:23:12 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:12.450099 | orchestrator | 2026-04-18 04:23:12 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:12.450171 | orchestrator | 2026-04-18 04:23:12 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:15.497697 | orchestrator | 2026-04-18 04:23:15 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:15.498756 | orchestrator | 2026-04-18 04:23:15 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:15.498796 | orchestrator | 2026-04-18 04:23:15 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:18.548778 | orchestrator | 2026-04-18 04:23:18 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 04:23:18.550692 | orchestrator | 2026-04-18 04:23:18 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:18.550766 | orchestrator | 2026-04-18 04:23:18 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:21.590140 | orchestrator | 2026-04-18 04:23:21 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:21.591859 | orchestrator | 2026-04-18 04:23:21 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:21.591927 | orchestrator | 2026-04-18 04:23:21 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:24.643615 | orchestrator | 2026-04-18 04:23:24 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:24.645111 | orchestrator | 2026-04-18 04:23:24 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:24.645148 | orchestrator | 2026-04-18 04:23:24 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:27.696329 | orchestrator | 2026-04-18 04:23:27 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:27.698304 | orchestrator | 2026-04-18 04:23:27 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:27.698376 | orchestrator | 2026-04-18 04:23:27 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:30.740929 | orchestrator | 2026-04-18 04:23:30 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:30.743504 | orchestrator | 2026-04-18 04:23:30 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:30.743558 | orchestrator | 2026-04-18 04:23:30 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:33.791966 | orchestrator | 2026-04-18 04:23:33 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:33.794091 | orchestrator | 2026-04-18 04:23:33 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:33.794142 | orchestrator | 2026-04-18 04:23:33 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:36.834839 | orchestrator | 2026-04-18 04:23:36 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:36.837072 | orchestrator | 2026-04-18 04:23:36 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:36.837332 | orchestrator | 2026-04-18 04:23:36 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:39.887868 | orchestrator | 2026-04-18 04:23:39 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:39.889696 | orchestrator | 2026-04-18 04:23:39 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:39.890182 | orchestrator | 2026-04-18 04:23:39 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:42.942320 | orchestrator | 2026-04-18 04:23:42 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:42.944993 | orchestrator | 2026-04-18 04:23:42 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:42.945073 | orchestrator | 2026-04-18 04:23:42 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:45.992015 | orchestrator | 2026-04-18 04:23:45 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:45.998796 | orchestrator | 2026-04-18 04:23:45 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:45.998914 | orchestrator | 2026-04-18 04:23:45 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:49.044107 | orchestrator | 2026-04-18 04:23:49 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:49.046594 | orchestrator | 2026-04-18 04:23:49 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
04:23:49.046652 | orchestrator | 2026-04-18 04:23:49 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:52.093768 | orchestrator | 2026-04-18 04:23:52 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:52.095327 | orchestrator | 2026-04-18 04:23:52 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:52.095354 | orchestrator | 2026-04-18 04:23:52 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:55.138283 | orchestrator | 2026-04-18 04:23:55 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:55.141189 | orchestrator | 2026-04-18 04:23:55 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:55.141290 | orchestrator | 2026-04-18 04:23:55 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:23:58.188451 | orchestrator | 2026-04-18 04:23:58 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:23:58.189674 | orchestrator | 2026-04-18 04:23:58 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:23:58.189707 | orchestrator | 2026-04-18 04:23:58 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:01.233511 | orchestrator | 2026-04-18 04:24:01 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:01.234760 | orchestrator | 2026-04-18 04:24:01 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:01.234812 | orchestrator | 2026-04-18 04:24:01 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:04.276862 | orchestrator | 2026-04-18 04:24:04 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:04.278918 | orchestrator | 2026-04-18 04:24:04 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:04.279000 | orchestrator | 2026-04-18 04:24:04 | INFO  | Wait 1 second(s) 
until the next check 2026-04-18 04:24:07.326107 | orchestrator | 2026-04-18 04:24:07 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:07.329135 | orchestrator | 2026-04-18 04:24:07 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:07.329218 | orchestrator | 2026-04-18 04:24:07 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:10.380784 | orchestrator | 2026-04-18 04:24:10 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:10.382002 | orchestrator | 2026-04-18 04:24:10 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:10.382102 | orchestrator | 2026-04-18 04:24:10 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:13.431015 | orchestrator | 2026-04-18 04:24:13 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:13.432028 | orchestrator | 2026-04-18 04:24:13 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:13.432079 | orchestrator | 2026-04-18 04:24:13 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:16.481678 | orchestrator | 2026-04-18 04:24:16 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:16.484055 | orchestrator | 2026-04-18 04:24:16 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:16.484115 | orchestrator | 2026-04-18 04:24:16 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:19.536028 | orchestrator | 2026-04-18 04:24:19 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:19.537543 | orchestrator | 2026-04-18 04:24:19 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:19.537617 | orchestrator | 2026-04-18 04:24:19 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:22.592736 | orchestrator | 2026-04-18 
04:24:22 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:22.594413 | orchestrator | 2026-04-18 04:24:22 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:22.594471 | orchestrator | 2026-04-18 04:24:22 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:25.640902 | orchestrator | 2026-04-18 04:24:25 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:25.643886 | orchestrator | 2026-04-18 04:24:25 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:25.643991 | orchestrator | 2026-04-18 04:24:25 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:28.692802 | orchestrator | 2026-04-18 04:24:28 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:28.694892 | orchestrator | 2026-04-18 04:24:28 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:28.695127 | orchestrator | 2026-04-18 04:24:28 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:31.744448 | orchestrator | 2026-04-18 04:24:31 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:31.745886 | orchestrator | 2026-04-18 04:24:31 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:31.745973 | orchestrator | 2026-04-18 04:24:31 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:34.797141 | orchestrator | 2026-04-18 04:24:34 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:34.798485 | orchestrator | 2026-04-18 04:24:34 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:34.798626 | orchestrator | 2026-04-18 04:24:34 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:37.847742 | orchestrator | 2026-04-18 04:24:37 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state 
STARTED 2026-04-18 04:24:37.848623 | orchestrator | 2026-04-18 04:24:37 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:37.848662 | orchestrator | 2026-04-18 04:24:37 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:40.906318 | orchestrator | 2026-04-18 04:24:40 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:40.912631 | orchestrator | 2026-04-18 04:24:40 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:40.912754 | orchestrator | 2026-04-18 04:24:40 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:43.963900 | orchestrator | 2026-04-18 04:24:43 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:43.965097 | orchestrator | 2026-04-18 04:24:43 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:43.965131 | orchestrator | 2026-04-18 04:24:43 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:47.008794 | orchestrator | 2026-04-18 04:24:47 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:47.010536 | orchestrator | 2026-04-18 04:24:47 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:47.010583 | orchestrator | 2026-04-18 04:24:47 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:50.063170 | orchestrator | 2026-04-18 04:24:50 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:50.065181 | orchestrator | 2026-04-18 04:24:50 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:50.065255 | orchestrator | 2026-04-18 04:24:50 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:53.116265 | orchestrator | 2026-04-18 04:24:53 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:53.117052 | orchestrator | 2026-04-18 04:24:53 | INFO  
| Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:53.117083 | orchestrator | 2026-04-18 04:24:53 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:56.168075 | orchestrator | 2026-04-18 04:24:56 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:56.170231 | orchestrator | 2026-04-18 04:24:56 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:56.170288 | orchestrator | 2026-04-18 04:24:56 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:24:59.214622 | orchestrator | 2026-04-18 04:24:59 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:24:59.215182 | orchestrator | 2026-04-18 04:24:59 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:24:59.215331 | orchestrator | 2026-04-18 04:24:59 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:25:02.263025 | orchestrator | 2026-04-18 04:25:02 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:25:02.265016 | orchestrator | 2026-04-18 04:25:02 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:25:02.265241 | orchestrator | 2026-04-18 04:25:02 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:25:05.315371 | orchestrator | 2026-04-18 04:25:05 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:25:05.316909 | orchestrator | 2026-04-18 04:25:05 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 04:25:05.316959 | orchestrator | 2026-04-18 04:25:05 | INFO  | Wait 1 second(s) until the next check 2026-04-18 04:25:08.363161 | orchestrator | 2026-04-18 04:25:08 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED 2026-04-18 04:25:08.365382 | orchestrator | 2026-04-18 04:25:08 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED 2026-04-18 
04:25:08.365447 | orchestrator | 2026-04-18 04:25:08 | INFO  | Wait 1 second(s) until the next check
2026-04-18 04:25:11.406154 | orchestrator | 2026-04-18 04:25:11 | INFO  | Task 9f5a403d-8430-46cf-aff0-0e7b22c6e70e is in state STARTED
2026-04-18 04:25:11.408461 | orchestrator | 2026-04-18 04:25:11 | INFO  | Task 4233de3e-3508-4122-beb3-868538e67502 is in state STARTED
2026-04-18 04:25:11.408696 | orchestrator | 2026-04-18 04:25:11 | INFO  | Wait 1 second(s) until the next check
[... identical polling entries repeated every ~3 seconds from 04:25:14 through 04:30:29 omitted; both tasks remained in state STARTED ...]
2026-04-18 04:30:31.817152 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-04-18 04:30:31.822731 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-18 04:30:32.580244 |
2026-04-18 04:30:32.580415 | PLAY [Post output play]
2026-04-18 04:30:32.603271 |
2026-04-18 04:30:32.603623 | LOOP [stage-output : Register sources]
2026-04-18 04:30:32.680390 |
2026-04-18 04:30:32.680733 | TASK [stage-output : Check sudo]
2026-04-18 04:30:33.527038 | orchestrator | sudo: a password is required
2026-04-18 04:30:33.721498 | orchestrator | ok: Runtime: 0:00:00.012268
2026-04-18 04:30:33.737555 |
2026-04-18 04:30:33.737732 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-18 04:30:33.780268 |
2026-04-18 04:30:33.780494 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-18 04:30:33.866280 | orchestrator | ok
2026-04-18 04:30:33.874363 |
2026-04-18 04:30:33.874498 | LOOP [stage-output : Ensure target folders exist]
2026-04-18 04:30:34.389047 | orchestrator | ok: "docs"
2026-04-18 04:30:34.389362 |
2026-04-18 04:30:34.686313 | orchestrator | ok: "artifacts"
2026-04-18 04:30:34.978545 | orchestrator | ok: "logs"
2026-04-18 04:30:35.000769 |
2026-04-18 04:30:35.001039 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-18 04:30:35.052215 |
2026-04-18 04:30:35.052553 | TASK [stage-output : Make all log files readable]
2026-04-18 04:30:35.376404 | orchestrator | ok
2026-04-18 04:30:35.387712 |
2026-04-18 04:30:35.387883 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-18 04:30:35.423495 | orchestrator | skipping: Conditional result was False
2026-04-18 04:30:35.440172 |
2026-04-18 04:30:35.440370 | TASK [stage-output : Discover log files for compression]
2026-04-18 04:30:35.464823 | orchestrator | skipping: Conditional result was False
2026-04-18 04:30:35.476502 |
2026-04-18 04:30:35.476658 | LOOP [stage-output : Archive everything from logs]
2026-04-18 04:30:35.520780 |
2026-04-18 04:30:35.520990 | PLAY [Post cleanup play]
2026-04-18 04:30:35.530372 |
2026-04-18 04:30:35.530485 | TASK [Set cloud fact (Zuul deployment)]
2026-04-18 04:30:35.588376 | orchestrator | ok
2026-04-18 04:30:35.599987 |
2026-04-18 04:30:35.600103 | TASK [Set cloud fact (local deployment)]
2026-04-18 04:30:35.634048 | orchestrator | skipping: Conditional result was False
2026-04-18 04:30:35.650148 |
2026-04-18 04:30:35.650302 |
TASK [Clean the cloud environment]
2026-04-18 04:30:36.328295 | orchestrator | 2026-04-18 04:30:36 - clean up servers
2026-04-18 04:30:37.226297 | orchestrator | 2026-04-18 04:30:37 - testbed-manager
2026-04-18 04:30:37.309820 | orchestrator | 2026-04-18 04:30:37 - testbed-node-5
2026-04-18 04:30:37.398132 | orchestrator | 2026-04-18 04:30:37 - testbed-node-1
2026-04-18 04:30:37.488149 | orchestrator | 2026-04-18 04:30:37 - testbed-node-4
2026-04-18 04:30:37.587509 | orchestrator | 2026-04-18 04:30:37 - testbed-node-0
2026-04-18 04:30:37.671983 | orchestrator | 2026-04-18 04:30:37 - testbed-node-3
2026-04-18 04:30:37.760531 | orchestrator | 2026-04-18 04:30:37 - testbed-node-2
2026-04-18 04:30:37.853448 | orchestrator | 2026-04-18 04:30:37 - clean up keypairs
2026-04-18 04:30:37.872225 | orchestrator | 2026-04-18 04:30:37 - testbed
2026-04-18 04:30:37.898097 | orchestrator | 2026-04-18 04:30:37 - wait for servers to be gone
2026-04-18 04:30:53.238235 | orchestrator | 2026-04-18 04:30:53 - clean up ports
2026-04-18 04:30:53.449609 | orchestrator | 2026-04-18 04:30:53 - 034a5580-51ad-4e77-85b3-d4b9d47a2e6a
2026-04-18 04:30:53.773322 | orchestrator | 2026-04-18 04:30:53 - 06918f51-651a-4ed1-a5b9-928415adc4bb
2026-04-18 04:30:54.050406 | orchestrator | 2026-04-18 04:30:54 - 45d67c3e-4ebb-4b89-a807-af036e70eb48
2026-04-18 04:30:54.352745 | orchestrator | 2026-04-18 04:30:54 - b630787c-e830-4913-a354-0e60ad669b8e
2026-04-18 04:30:54.869060 | orchestrator | 2026-04-18 04:30:54 - bce17210-f5bc-42d3-a7cf-67eacc27275b
2026-04-18 04:30:55.183949 | orchestrator | 2026-04-18 04:30:55 - d95991d0-9da1-4cd7-a5b0-ccd42896502f
2026-04-18 04:30:55.442471 | orchestrator | 2026-04-18 04:30:55 - f656cb47-9ea5-4f5f-b768-803c7d9cdc9c
2026-04-18 04:30:55.714781 | orchestrator | 2026-04-18 04:30:55 - clean up volumes
2026-04-18 04:30:55.850886 | orchestrator | 2026-04-18 04:30:55 - testbed-volume-4-node-base
2026-04-18 04:30:55.888194 | orchestrator | 2026-04-18 04:30:55 - testbed-volume-3-node-base
2026-04-18 04:30:55.930602 | orchestrator | 2026-04-18 04:30:55 - testbed-volume-0-node-base
2026-04-18 04:30:55.974712 | orchestrator | 2026-04-18 04:30:55 - testbed-volume-1-node-base
2026-04-18 04:30:56.016481 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-2-node-base
2026-04-18 04:30:56.059948 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-5-node-base
2026-04-18 04:30:56.104809 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-manager-base
2026-04-18 04:30:56.151779 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-1-node-4
2026-04-18 04:30:56.201216 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-5-node-5
2026-04-18 04:30:56.249211 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-0-node-3
2026-04-18 04:30:56.291216 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-6-node-3
2026-04-18 04:30:56.338164 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-8-node-5
2026-04-18 04:30:56.401715 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-3-node-3
2026-04-18 04:30:56.450341 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-4-node-4
2026-04-18 04:30:56.500531 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-2-node-5
2026-04-18 04:30:56.550726 | orchestrator | 2026-04-18 04:30:56 - testbed-volume-7-node-4
2026-04-18 04:30:56.606473 | orchestrator | 2026-04-18 04:30:56 - disconnect routers
2026-04-18 04:30:56.756689 | orchestrator | 2026-04-18 04:30:56 - testbed
2026-04-18 04:30:58.775047 | orchestrator | 2026-04-18 04:30:58 - clean up subnets
2026-04-18 04:30:58.838122 | orchestrator | 2026-04-18 04:30:58 - subnet-testbed-management
2026-04-18 04:30:59.155198 | orchestrator | 2026-04-18 04:30:59 - clean up networks
2026-04-18 04:30:59.375188 | orchestrator | 2026-04-18 04:30:59 - net-testbed-management
2026-04-18 04:30:59.707518 | orchestrator | 2026-04-18 04:30:59 - clean up security groups
2026-04-18 04:30:59.750750 | orchestrator | 2026-04-18 04:30:59 - testbed-management
2026-04-18 04:30:59.895748 | orchestrator | 2026-04-18 04:30:59 - testbed-node
2026-04-18 04:31:00.032382 | orchestrator | 2026-04-18 04:31:00 - clean up floating ips
2026-04-18 04:31:00.067392 | orchestrator | 2026-04-18 04:31:00 - 81.163.192.97
2026-04-18 04:31:00.501009 | orchestrator | 2026-04-18 04:31:00 - clean up routers
2026-04-18 04:31:00.632736 | orchestrator | 2026-04-18 04:31:00 - testbed
2026-04-18 04:31:02.216893 | orchestrator | ok: Runtime: 0:00:25.776186
2026-04-18 04:31:02.221326 |
2026-04-18 04:31:02.221497 | PLAY RECAP
2026-04-18 04:31:02.221622 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-18 04:31:02.221684 |
2026-04-18 04:31:02.370132 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-18 04:31:02.371507 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-18 04:31:03.130422 |
2026-04-18 04:31:03.130589 | PLAY [Cleanup play]
2026-04-18 04:31:03.146581 |
2026-04-18 04:31:03.146718 | TASK [Set cloud fact (Zuul deployment)]
2026-04-18 04:31:03.189129 | orchestrator | ok
2026-04-18 04:31:03.196031 |
2026-04-18 04:31:03.196162 | TASK [Set cloud fact (local deployment)]
2026-04-18 04:31:03.230189 | orchestrator | skipping: Conditional result was False
2026-04-18 04:31:03.239707 |
2026-04-18 04:31:03.239822 | TASK [Clean the cloud environment]
2026-04-18 04:31:04.460649 | orchestrator | 2026-04-18 04:31:04 - clean up servers
2026-04-18 04:31:05.138070 | orchestrator | 2026-04-18 04:31:05 - clean up keypairs
2026-04-18 04:31:05.152593 | orchestrator | 2026-04-18 04:31:05 - wait for servers to be gone
2026-04-18 04:31:05.197049 | orchestrator | 2026-04-18 04:31:05 - clean up ports
2026-04-18 04:31:05.279961 | orchestrator | 2026-04-18 04:31:05 - clean up volumes
2026-04-18 04:31:05.363439 | orchestrator | 2026-04-18 04:31:05 - disconnect routers
2026-04-18 04:31:05.399318 | orchestrator | 2026-04-18 04:31:05 - clean up subnets
2026-04-18 04:31:05.448103 | orchestrator | 2026-04-18 04:31:05 - clean up networks
2026-04-18 04:31:05.627259 | orchestrator | 2026-04-18 04:31:05 - clean up security groups
2026-04-18 04:31:05.659899 | orchestrator | 2026-04-18 04:31:05 - clean up floating ips
2026-04-18 04:31:05.711724 | orchestrator | 2026-04-18 04:31:05 - clean up routers
2026-04-18 04:31:06.275514 | orchestrator | ok: Runtime: 0:00:01.714111
2026-04-18 04:31:06.278125 |
2026-04-18 04:31:06.278259 | PLAY RECAP
2026-04-18 04:31:06.278340 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-04-18 04:31:06.278378 |
2026-04-18 04:31:06.454915 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-18 04:31:06.456568 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-18 04:31:07.289243 |
2026-04-18 04:31:07.289412 | PLAY [Base post-fetch]
2026-04-18 04:31:07.306015 |
2026-04-18 04:31:07.306150 | TASK [fetch-output : Set log path for multiple nodes]
2026-04-18 04:31:07.361534 | orchestrator | skipping: Conditional result was False
2026-04-18 04:31:07.372592 |
2026-04-18 04:31:07.372848 | TASK [fetch-output : Set log path for single node]
2026-04-18 04:31:07.419809 | orchestrator | ok
2026-04-18 04:31:07.428528 |
2026-04-18 04:31:07.428672 | LOOP [fetch-output : Ensure local output dirs]
2026-04-18 04:31:07.976113 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/1d4be2a3a6a642618b6b9325a3780e9a/work/logs"
2026-04-18 04:31:08.265528 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1d4be2a3a6a642618b6b9325a3780e9a/work/artifacts"
2026-04-18 04:31:08.590254 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1d4be2a3a6a642618b6b9325a3780e9a/work/docs"
2026-04-18 04:31:08.605939 |
2026-04-18 04:31:08.606203 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-04-18 04:31:09.476240 | orchestrator | changed: .d..t...... ./
2026-04-18 04:31:09.476644 | orchestrator | changed: All items complete
2026-04-18 04:31:09.476707 |
2026-04-18 04:31:10.166060 | orchestrator | changed: .d..t...... ./
2026-04-18 04:31:10.930297 | orchestrator | changed: .d..t...... ./
2026-04-18 04:31:10.955340 |
2026-04-18 04:31:10.955488 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-04-18 04:31:11.004122 | orchestrator | skipping: Conditional result was False
2026-04-18 04:31:11.007748 | orchestrator | skipping: Conditional result was False
2026-04-18 04:31:11.024907 |
2026-04-18 04:31:11.025025 | PLAY RECAP
2026-04-18 04:31:11.025082 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-04-18 04:31:11.025113 |
2026-04-18 04:31:11.172837 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-04-18 04:31:11.175864 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-18 04:31:11.955088 |
2026-04-18 04:31:11.955250 | PLAY [Base post]
2026-04-18 04:31:11.969888 |
2026-04-18 04:31:11.970070 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-04-18 04:31:12.950297 | orchestrator | changed
2026-04-18 04:31:12.959846 |
2026-04-18 04:31:12.960004 | PLAY RECAP
2026-04-18 04:31:12.960080 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-04-18 04:31:12.960148 |
2026-04-18 04:31:13.092839 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-04-18 04:31:13.093906 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-04-18 04:31:13.893017 |
2026-04-18 04:31:13.893202 | PLAY [Base post-logs]
2026-04-18 04:31:13.904347 |
2026-04-18 04:31:13.904498 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-04-18 04:31:14.370429 | localhost | changed
2026-04-18 04:31:14.384812 |
2026-04-18
04:31:14.385027 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-04-18 04:31:14.415927 | localhost | ok 2026-04-18 04:31:14.422316 | 2026-04-18 04:31:14.422515 | TASK [Set zuul-log-path fact] 2026-04-18 04:31:14.442369 | localhost | ok 2026-04-18 04:31:14.460803 | 2026-04-18 04:31:14.460995 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-04-18 04:31:14.488340 | localhost | ok 2026-04-18 04:31:14.495804 | 2026-04-18 04:31:14.495979 | TASK [upload-logs : Create log directories] 2026-04-18 04:31:14.994287 | localhost | changed 2026-04-18 04:31:14.998193 | 2026-04-18 04:31:14.998317 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-04-18 04:31:15.501081 | localhost -> localhost | ok: Runtime: 0:00:00.007266 2026-04-18 04:31:15.515033 | 2026-04-18 04:31:15.515237 | TASK [upload-logs : Upload logs to log server] 2026-04-18 04:31:16.092007 | localhost | Output suppressed because no_log was given 2026-04-18 04:31:16.094985 | 2026-04-18 04:31:16.095585 | LOOP [upload-logs : Compress console log and json output] 2026-04-18 04:31:16.178506 | localhost | skipping: Conditional result was False 2026-04-18 04:31:16.184009 | localhost | skipping: Conditional result was False 2026-04-18 04:31:16.196505 | 2026-04-18 04:31:16.196650 | LOOP [upload-logs : Upload compressed console log and json output] 2026-04-18 04:31:16.256151 | localhost | skipping: Conditional result was False 2026-04-18 04:31:16.256612 | 2026-04-18 04:31:16.262741 | localhost | skipping: Conditional result was False 2026-04-18 04:31:16.267726 | 2026-04-18 04:31:16.267860 | LOOP [upload-logs : Upload console log and json output]