2025-07-24 00:00:08.733319 | Job console starting
2025-07-24 00:00:08.742890 | Updating git repos
2025-07-24 00:00:08.820674 | Cloning repos into workspace
2025-07-24 00:00:09.023063 | Restoring repo states
2025-07-24 00:00:09.056614 | Merging changes
2025-07-24 00:00:09.056636 | Checking out repos
2025-07-24 00:00:09.523600 | Preparing playbooks
2025-07-24 00:00:10.223871 | Running Ansible setup
2025-07-24 00:00:15.766852 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-24 00:00:17.119079 |
2025-07-24 00:00:17.119251 | PLAY [Base pre]
2025-07-24 00:00:17.143733 |
2025-07-24 00:00:17.143922 | TASK [Setup log path fact]
2025-07-24 00:00:17.165124 | orchestrator | ok
2025-07-24 00:00:17.181246 |
2025-07-24 00:00:17.181353 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-24 00:00:17.253257 | orchestrator | ok
2025-07-24 00:00:17.270284 |
2025-07-24 00:00:17.270400 | TASK [emit-job-header : Print job information]
2025-07-24 00:00:17.352791 | # Job Information
2025-07-24 00:00:17.352972 | Ansible Version: 2.16.14
2025-07-24 00:00:17.353006 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-07-24 00:00:17.353061 | Pipeline: periodic-midnight
2025-07-24 00:00:17.353086 | Executor: 521e9411259a
2025-07-24 00:00:17.353104 | Triggered by: https://github.com/osism/testbed
2025-07-24 00:00:17.353123 | Event ID: 2fcc5346a7364f299ba99a305c9ecff1
2025-07-24 00:00:17.367534 |
2025-07-24 00:00:17.367636 | LOOP [emit-job-header : Print node information]
2025-07-24 00:00:17.766233 | orchestrator | ok:
2025-07-24 00:00:17.766374 | orchestrator | # Node Information
2025-07-24 00:00:17.766402 | orchestrator | Inventory Hostname: orchestrator
2025-07-24 00:00:17.766423 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-24 00:00:17.766441 | orchestrator | Username: zuul-testbed02
2025-07-24 00:00:17.766458 | orchestrator | Distro: Debian 12.11
2025-07-24 00:00:17.766477 | orchestrator | Provider: static-testbed
2025-07-24 00:00:17.766501 | orchestrator | Region:
2025-07-24 00:00:17.766546 | orchestrator | Label: testbed-orchestrator
2025-07-24 00:00:17.766695 | orchestrator | Product Name: OpenStack Nova
2025-07-24 00:00:17.766732 | orchestrator | Interface IP: 81.163.193.140
2025-07-24 00:00:17.787260 |
2025-07-24 00:00:17.787358 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-24 00:00:19.017437 | orchestrator -> localhost | changed
2025-07-24 00:00:19.023980 |
2025-07-24 00:00:19.024094 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-24 00:00:21.484968 | orchestrator -> localhost | changed
2025-07-24 00:00:21.519120 |
2025-07-24 00:00:21.519250 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-24 00:00:22.040049 | orchestrator -> localhost | ok
2025-07-24 00:00:22.046083 |
2025-07-24 00:00:22.046189 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-24 00:00:22.096244 | orchestrator | ok
2025-07-24 00:00:22.146711 | orchestrator | included: /var/lib/zuul/builds/2423a3765b4b44bd9960365058545dbd/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-24 00:00:22.181537 |
2025-07-24 00:00:22.181637 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-24 00:00:25.362570 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-24 00:00:25.362769 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/2423a3765b4b44bd9960365058545dbd/work/2423a3765b4b44bd9960365058545dbd_id_rsa
2025-07-24 00:00:25.362807 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/2423a3765b4b44bd9960365058545dbd/work/2423a3765b4b44bd9960365058545dbd_id_rsa.pub
2025-07-24 00:00:25.362932 | orchestrator -> localhost | The key fingerprint is:
2025-07-24 00:00:25.362964 | orchestrator -> localhost | SHA256:w5ta1eCtr790iCoXcx/sWLr47BlCI9oI9oXV/QJc6XY zuul-build-sshkey
2025-07-24 00:00:25.362988 | orchestrator -> localhost | The key's randomart image is:
2025-07-24 00:00:25.363021 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-24 00:00:25.363044 | orchestrator -> localhost | | .. |
2025-07-24 00:00:25.363065 | orchestrator -> localhost | | o o. |
2025-07-24 00:00:25.363085 | orchestrator -> localhost | | . +.o |
2025-07-24 00:00:25.363106 | orchestrator -> localhost | | o . oo=E |
2025-07-24 00:00:25.363126 | orchestrator -> localhost | | o . o S.+o+ |
2025-07-24 00:00:25.363163 | orchestrator -> localhost | | . o = oo*.++. |
2025-07-24 00:00:25.363184 | orchestrator -> localhost | | + . =++*o.. |
2025-07-24 00:00:25.363204 | orchestrator -> localhost | | .o.=o=o. |
2025-07-24 00:00:25.363225 | orchestrator -> localhost | | .oooB++. |
2025-07-24 00:00:25.363246 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-24 00:00:25.363305 | orchestrator -> localhost | ok: Runtime: 0:00:01.902894
2025-07-24 00:00:25.370491 |
2025-07-24 00:00:25.370603 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-24 00:00:25.413524 | orchestrator | ok
2025-07-24 00:00:25.422324 | orchestrator | included: /var/lib/zuul/builds/2423a3765b4b44bd9960365058545dbd/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-24 00:00:25.429937 |
2025-07-24 00:00:25.430026 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-24 00:00:25.452769 | orchestrator | skipping: Conditional result was False
2025-07-24 00:00:25.459953 |
2025-07-24 00:00:25.460050 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-24 00:00:26.373281 | orchestrator | changed
2025-07-24 00:00:26.378309 |
2025-07-24 00:00:26.378389 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-24 00:00:26.839846 | orchestrator | ok
2025-07-24 00:00:26.853468 |
2025-07-24 00:00:26.853574 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-24 00:00:27.308611 | orchestrator | ok
2025-07-24 00:00:27.313531 |
2025-07-24 00:00:27.313616 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-24 00:00:27.764371 | orchestrator | ok
2025-07-24 00:00:27.787403 |
2025-07-24 00:00:27.787505 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-24 00:00:27.832717 | orchestrator | skipping: Conditional result was False
2025-07-24 00:00:27.839214 |
2025-07-24 00:00:27.839304 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-24 00:00:28.628355 | orchestrator -> localhost | changed
2025-07-24 00:00:28.642367 |
2025-07-24 00:00:28.642469 | TASK [add-build-sshkey : Add back temp key]
2025-07-24 00:00:29.159677 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/2423a3765b4b44bd9960365058545dbd/work/2423a3765b4b44bd9960365058545dbd_id_rsa (zuul-build-sshkey)
2025-07-24 00:00:29.159863 | orchestrator -> localhost | ok: Runtime: 0:00:00.023496
2025-07-24 00:00:29.165804 |
2025-07-24 00:00:29.165886 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-24 00:00:29.656090 | orchestrator | ok
2025-07-24 00:00:29.661117 |
2025-07-24 00:00:29.661211 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-24 00:00:29.698414 | orchestrator | skipping: Conditional result was False
2025-07-24 00:00:29.753614 |
2025-07-24 00:00:29.753715 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-24 00:00:30.160083 | orchestrator | ok
2025-07-24 00:00:30.175335 |
2025-07-24 00:00:30.183747 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-24 00:00:30.211571 | orchestrator | ok
2025-07-24 00:00:30.235188 |
2025-07-24 00:00:30.235292 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-24 00:00:30.747487 | orchestrator -> localhost | ok
2025-07-24 00:00:30.753356 |
2025-07-24 00:00:30.753474 | TASK [validate-host : Collect information about the host]
2025-07-24 00:00:32.132393 | orchestrator | ok
2025-07-24 00:00:32.150643 |
2025-07-24 00:00:32.152955 | TASK [validate-host : Sanitize hostname]
2025-07-24 00:00:32.231121 | orchestrator | ok
2025-07-24 00:00:32.241888 |
2025-07-24 00:00:32.242218 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-24 00:00:33.501628 | orchestrator -> localhost | changed
2025-07-24 00:00:33.506669 |
2025-07-24 00:00:33.506806 | TASK [validate-host : Collect information about zuul worker]
2025-07-24 00:00:34.019375 | orchestrator | ok
2025-07-24 00:00:34.023862 |
2025-07-24 00:00:34.023956 | TASK [validate-host : Write out all zuul information for each host]
2025-07-24 00:00:34.961401 | orchestrator -> localhost | changed
2025-07-24 00:00:34.970313 |
2025-07-24 00:00:34.970401 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-24 00:00:35.240159 | orchestrator | ok
2025-07-24 00:00:35.245001 |
2025-07-24 00:00:35.245084 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-24 00:01:15.078907 | orchestrator | changed:
2025-07-24 00:01:15.079099 | orchestrator | .d..t...... src/
2025-07-24 00:01:15.079161 | orchestrator | .d..t...... src/github.com/
2025-07-24 00:01:15.079187 | orchestrator | .d..t...... src/github.com/osism/
2025-07-24 00:01:15.079208 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-24 00:01:15.079229 | orchestrator | RedHat.yml
2025-07-24 00:01:15.097603 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-24 00:01:15.097621 | orchestrator | RedHat.yml
2025-07-24 00:01:15.097673 | orchestrator | = 1.53.0"...
2025-07-24 00:01:26.990587 | orchestrator | 00:01:26.990 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-24 00:01:27.453970 | orchestrator | 00:01:27.453 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-07-24 00:01:28.405029 | orchestrator | 00:01:28.404 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-07-24 00:01:28.783719 | orchestrator | 00:01:28.783 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-24 00:01:29.323230 | orchestrator | 00:01:29.323 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-24 00:01:29.797567 | orchestrator | 00:01:29.797 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-24 00:01:30.540875 | orchestrator | 00:01:30.540 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-24 00:01:30.540955 | orchestrator | 00:01:30.540 STDOUT terraform: Providers are signed by their developers.
2025-07-24 00:01:30.541286 | orchestrator | 00:01:30.540 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-24 00:01:30.541393 | orchestrator | 00:01:30.541 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-24 00:01:30.541501 | orchestrator | 00:01:30.541 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-24 00:01:30.541665 | orchestrator | 00:01:30.541 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-24 00:01:30.541810 | orchestrator | 00:01:30.541 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-24 00:01:30.541929 | orchestrator | 00:01:30.541 STDOUT terraform: you run "tofu init" in the future.
2025-07-24 00:01:30.542005 | orchestrator | 00:01:30.541 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-24 00:01:30.542135 | orchestrator | 00:01:30.542 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-24 00:01:30.542243 | orchestrator | 00:01:30.542 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-24 00:01:30.542264 | orchestrator | 00:01:30.542 STDOUT terraform: should now work.
2025-07-24 00:01:30.542362 | orchestrator | 00:01:30.542 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-24 00:01:30.542472 | orchestrator | 00:01:30.542 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-24 00:01:30.542546 | orchestrator | 00:01:30.542 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-24 00:01:30.658415 | orchestrator | 00:01:30.656 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-24 00:01:30.658743 | orchestrator | 00:01:30.656 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-24 00:01:30.850519 | orchestrator | 00:01:30.850 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-24 00:01:30.850575 | orchestrator | 00:01:30.850 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-24 00:01:30.850586 | orchestrator | 00:01:30.850 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-24 00:01:30.850592 | orchestrator | 00:01:30.850 STDOUT terraform: for this configuration.
2025-07-24 00:01:31.005242 | orchestrator | 00:01:31.005 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-24 00:01:31.005392 | orchestrator | 00:01:31.005 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-24 00:01:31.102104 | orchestrator | 00:01:31.101 STDOUT terraform: ci.auto.tfvars
2025-07-24 00:01:31.585988 | orchestrator | 00:01:31.585 STDOUT terraform: default_custom.tf
2025-07-24 00:01:31.732007 | orchestrator | 00:01:31.730 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-24 00:01:32.771588 | orchestrator | 00:01:32.767 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-24 00:01:33.340065 | orchestrator | 00:01:33.339 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-24 00:01:33.660953 | orchestrator | 00:01:33.660 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-24 00:01:33.661032 | orchestrator | 00:01:33.660 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-24 00:01:33.661041 | orchestrator | 00:01:33.660 STDOUT terraform:  + create
2025-07-24 00:01:33.661048 | orchestrator | 00:01:33.660 STDOUT terraform:  <= read (data resources)
2025-07-24 00:01:33.661055 | orchestrator | 00:01:33.660 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-24 00:01:33.661150 | orchestrator | 00:01:33.661 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-07-24 00:01:33.661175 | orchestrator | 00:01:33.661 STDOUT terraform:  # (config refers to values not yet known)
2025-07-24 00:01:33.661215 | orchestrator | 00:01:33.661 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-24 00:01:33.661257 | orchestrator | 00:01:33.661 STDOUT terraform:  + checksum = (known after apply)
2025-07-24 00:01:33.661291 | orchestrator | 00:01:33.661 STDOUT terraform:  + created_at = (known after apply)
2025-07-24 00:01:33.661329 | orchestrator | 00:01:33.661 STDOUT terraform:  + file = (known after apply)
2025-07-24 00:01:33.661364 | orchestrator | 00:01:33.661 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.661416 | orchestrator | 00:01:33.661 STDOUT terraform:  + metadata = (known after apply)
2025-07-24 00:01:33.661481 | orchestrator | 00:01:33.661 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-07-24 00:01:33.661531 | orchestrator | 00:01:33.661 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-07-24 00:01:33.661577 | orchestrator | 00:01:33.661 STDOUT terraform:  + most_recent = true
2025-07-24 00:01:33.661604 | orchestrator | 00:01:33.661 STDOUT terraform:  + name = (known after apply)
2025-07-24 00:01:33.661639 | orchestrator | 00:01:33.661 STDOUT terraform:  + protected = (known after apply)
2025-07-24 00:01:33.661680 | orchestrator | 00:01:33.661 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.661711 | orchestrator | 00:01:33.661 STDOUT terraform:  + schema = (known after apply)
2025-07-24 00:01:33.661754 | orchestrator | 00:01:33.661 STDOUT terraform:  + size_bytes = (known after apply)
2025-07-24 00:01:33.661781 | orchestrator | 00:01:33.661 STDOUT terraform:  + tags = (known after apply)
2025-07-24 00:01:33.661809 | orchestrator | 00:01:33.661 STDOUT terraform:  + updated_at = (known after apply)
2025-07-24 00:01:33.661825 | orchestrator | 00:01:33.661 STDOUT terraform:  }
2025-07-24 00:01:33.661909 | orchestrator | 00:01:33.661 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-07-24 00:01:33.661949 | orchestrator | 00:01:33.661 STDOUT terraform:  # (config refers to values not yet known)
2025-07-24 00:01:33.661981 | orchestrator | 00:01:33.661 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-24 00:01:33.662034 | orchestrator | 00:01:33.661 STDOUT terraform:  + checksum = (known after apply)
2025-07-24 00:01:33.662072 | orchestrator | 00:01:33.662 STDOUT terraform:  + created_at = (known after apply)
2025-07-24 00:01:33.662126 | orchestrator | 00:01:33.662 STDOUT terraform:  + file = (known after apply)
2025-07-24 00:01:33.662147 | orchestrator | 00:01:33.662 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.662174 | orchestrator | 00:01:33.662 STDOUT terraform:  + metadata = (known after apply)
2025-07-24 00:01:33.662218 | orchestrator | 00:01:33.662 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-07-24 00:01:33.662244 | orchestrator | 00:01:33.662 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-07-24 00:01:33.662268 | orchestrator | 00:01:33.662 STDOUT terraform:  + most_recent = true
2025-07-24 00:01:33.662311 | orchestrator | 00:01:33.662 STDOUT terraform:  + name = (known after apply)
2025-07-24 00:01:33.662335 | orchestrator | 00:01:33.662 STDOUT terraform:  + protected = (known after apply)
2025-07-24 00:01:33.662368 | orchestrator | 00:01:33.662 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.662410 | orchestrator | 00:01:33.662 STDOUT terraform:  + schema = (known after apply)
2025-07-24 00:01:33.662436 | orchestrator | 00:01:33.662 STDOUT terraform:  + size_bytes = (known after apply)
2025-07-24 00:01:33.662479 | orchestrator | 00:01:33.662 STDOUT terraform:  + tags = (known after apply)
2025-07-24 00:01:33.662503 | orchestrator | 00:01:33.662 STDOUT terraform:  + updated_at = (known after apply)
2025-07-24 00:01:33.662522 | orchestrator | 00:01:33.662 STDOUT terraform:  }
2025-07-24 00:01:33.662610 | orchestrator | 00:01:33.662 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-07-24 00:01:33.662658 | orchestrator | 00:01:33.662 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-07-24 00:01:33.662689 | orchestrator | 00:01:33.662 STDOUT terraform:  + content = (known after apply)
2025-07-24 00:01:33.662740 | orchestrator | 00:01:33.662 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-24 00:01:33.662771 | orchestrator | 00:01:33.662 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-24 00:01:33.662826 | orchestrator | 00:01:33.662 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-24 00:01:33.662900 | orchestrator | 00:01:33.662 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-24 00:01:33.662927 | orchestrator | 00:01:33.662 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-24 00:01:33.662964 | orchestrator | 00:01:33.662 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-24 00:01:33.662999 | orchestrator | 00:01:33.662 STDOUT terraform:  + directory_permission = "0777"
2025-07-24 00:01:33.663020 | orchestrator | 00:01:33.662 STDOUT terraform:  + file_permission = "0644"
2025-07-24 00:01:33.663060 | orchestrator | 00:01:33.663 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-07-24 00:01:33.663101 | orchestrator | 00:01:33.663 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.663119 | orchestrator | 00:01:33.663 STDOUT terraform:  }
2025-07-24 00:01:33.663151 | orchestrator | 00:01:33.663 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-07-24 00:01:33.663182 | orchestrator | 00:01:33.663 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-07-24 00:01:33.663222 | orchestrator | 00:01:33.663 STDOUT terraform:  + content = (known after apply)
2025-07-24 00:01:33.663263 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-24 00:01:33.663302 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-24 00:01:33.663342 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-24 00:01:33.663382 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-24 00:01:33.663423 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-24 00:01:33.663470 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-24 00:01:33.663487 | orchestrator | 00:01:33.663 STDOUT terraform:  + directory_permission = "0777"
2025-07-24 00:01:33.663516 | orchestrator | 00:01:33.663 STDOUT terraform:  + file_permission = "0644"
2025-07-24 00:01:33.663551 | orchestrator | 00:01:33.663 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-07-24 00:01:33.663594 | orchestrator | 00:01:33.663 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.663611 | orchestrator | 00:01:33.663 STDOUT terraform:  }
2025-07-24 00:01:33.663639 | orchestrator | 00:01:33.663 STDOUT terraform:  # local_file.inventory will be created
2025-07-24 00:01:33.663668 | orchestrator | 00:01:33.663 STDOUT terraform:  + resource "local_file" "inventory" {
2025-07-24 00:01:33.663708 | orchestrator | 00:01:33.663 STDOUT terraform:  + content = (known after apply)
2025-07-24 00:01:33.663748 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-24 00:01:33.663788 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-24 00:01:33.663828 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-24 00:01:33.663879 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-24 00:01:33.663919 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-24 00:01:33.663961 | orchestrator | 00:01:33.663 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-24 00:01:33.663988 | orchestrator | 00:01:33.663 STDOUT terraform:  + directory_permission = "0777"
2025-07-24 00:01:33.664017 | orchestrator | 00:01:33.663 STDOUT terraform:  + file_permission = "0644"
2025-07-24 00:01:33.664059 | orchestrator | 00:01:33.664 STDOUT terraform:  + filename = "inventory.ci"
2025-07-24 00:01:33.664093 | orchestrator | 00:01:33.664 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.664108 | orchestrator | 00:01:33.664 STDOUT terraform:  }
2025-07-24 00:01:33.664149 | orchestrator | 00:01:33.664 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-07-24 00:01:33.664179 | orchestrator | 00:01:33.664 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-07-24 00:01:33.664214 | orchestrator | 00:01:33.664 STDOUT terraform:  + content = (sensitive value)
2025-07-24 00:01:33.664254 | orchestrator | 00:01:33.664 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-07-24 00:01:33.664293 | orchestrator | 00:01:33.664 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-07-24 00:01:33.664333 | orchestrator | 00:01:33.664 STDOUT terraform:  + content_md5 = (known after apply)
2025-07-24 00:01:33.664372 | orchestrator | 00:01:33.664 STDOUT terraform:  + content_sha1 = (known after apply)
2025-07-24 00:01:33.664413 | orchestrator | 00:01:33.664 STDOUT terraform:  + content_sha256 = (known after apply)
2025-07-24 00:01:33.664453 | orchestrator | 00:01:33.664 STDOUT terraform:  + content_sha512 = (known after apply)
2025-07-24 00:01:33.664480 | orchestrator | 00:01:33.664 STDOUT terraform:  + directory_permission = "0700"
2025-07-24 00:01:33.664507 | orchestrator | 00:01:33.664 STDOUT terraform:  + file_permission = "0600"
2025-07-24 00:01:33.664550 | orchestrator | 00:01:33.664 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-07-24 00:01:33.664588 | orchestrator | 00:01:33.664 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.664612 | orchestrator | 00:01:33.664 STDOUT terraform:  }
2025-07-24 00:01:33.664649 | orchestrator | 00:01:33.664 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-07-24 00:01:33.664683 | orchestrator | 00:01:33.664 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-07-24 00:01:33.664717 | orchestrator | 00:01:33.664 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.664724 | orchestrator | 00:01:33.664 STDOUT terraform:  }
2025-07-24 00:01:33.664776 | orchestrator | 00:01:33.664 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-24 00:01:33.664829 | orchestrator | 00:01:33.664 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-24 00:01:33.664890 | orchestrator | 00:01:33.664 STDOUT terraform:  + attachment = (known after apply)
2025-07-24 00:01:33.664908 | orchestrator | 00:01:33.664 STDOUT terraform:  + availability_zone = "nova"
2025-07-24 00:01:33.664947 | orchestrator | 00:01:33.664 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.664989 | orchestrator | 00:01:33.664 STDOUT terraform:  + image_id = (known after apply)
2025-07-24 00:01:33.665030 | orchestrator | 00:01:33.664 STDOUT terraform:  + metadata = (known after apply)
2025-07-24 00:01:33.665082 | orchestrator | 00:01:33.665 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-07-24 00:01:33.665132 | orchestrator | 00:01:33.665 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.665150 | orchestrator | 00:01:33.665 STDOUT terraform:  + size = 80
2025-07-24 00:01:33.665174 | orchestrator | 00:01:33.665 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-24 00:01:33.665201 | orchestrator | 00:01:33.665 STDOUT terraform:  + volume_type = "ssd"
2025-07-24 00:01:33.665217 | orchestrator | 00:01:33.665 STDOUT terraform:  }
2025-07-24 00:01:33.665271 | orchestrator | 00:01:33.665 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-24 00:01:33.665322 | orchestrator | 00:01:33.665 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-24 00:01:33.665363 | orchestrator | 00:01:33.665 STDOUT terraform:  + attachment = (known after apply)
2025-07-24 00:01:33.665391 | orchestrator | 00:01:33.665 STDOUT terraform:  + availability_zone = "nova"
2025-07-24 00:01:33.665433 | orchestrator | 00:01:33.665 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.665473 | orchestrator | 00:01:33.665 STDOUT terraform:  + image_id = (known after apply)
2025-07-24 00:01:33.665513 | orchestrator | 00:01:33.665 STDOUT terraform:  + metadata = (known after apply)
2025-07-24 00:01:33.665581 | orchestrator | 00:01:33.665 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-07-24 00:01:33.665652 | orchestrator | 00:01:33.665 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.665682 | orchestrator | 00:01:33.665 STDOUT terraform:  + size = 80
2025-07-24 00:01:33.665713 | orchestrator | 00:01:33.665 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-24 00:01:33.665743 | orchestrator | 00:01:33.665 STDOUT terraform:  + volume_type = "ssd"
2025-07-24 00:01:33.665759 | orchestrator | 00:01:33.665 STDOUT terraform:  }
2025-07-24 00:01:33.665812 | orchestrator | 00:01:33.665 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-24 00:01:33.665892 | orchestrator | 00:01:33.665 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-24 00:01:33.665931 | orchestrator | 00:01:33.665 STDOUT terraform:  + attachment = (known after apply)
2025-07-24 00:01:33.665961 | orchestrator | 00:01:33.665 STDOUT terraform:  + availability_zone = "nova"
2025-07-24 00:01:33.666003 | orchestrator | 00:01:33.665 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.666062 | orchestrator | 00:01:33.666 STDOUT terraform:  + image_id = (known after apply)
2025-07-24 00:01:33.666102 | orchestrator | 00:01:33.666 STDOUT terraform:  + metadata = (known after apply)
2025-07-24 00:01:33.666158 | orchestrator | 00:01:33.666 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-07-24 00:01:33.666196 | orchestrator | 00:01:33.666 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.666276 | orchestrator | 00:01:33.666 STDOUT terraform:  + size = 80
2025-07-24 00:01:33.666282 | orchestrator | 00:01:33.666 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-24 00:01:33.666286 | orchestrator | 00:01:33.666 STDOUT terraform:  + volume_type = "ssd"
2025-07-24 00:01:33.666294 | orchestrator | 00:01:33.666 STDOUT terraform:  }
2025-07-24 00:01:33.666343 | orchestrator | 00:01:33.666 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-24 00:01:33.666405 | orchestrator | 00:01:33.666 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-24 00:01:33.666449 | orchestrator | 00:01:33.666 STDOUT terraform:  + attachment = (known after apply)
2025-07-24 00:01:33.666456 | orchestrator | 00:01:33.666 STDOUT terraform:  + availability_zone = "nova"
2025-07-24 00:01:33.666515 | orchestrator | 00:01:33.666 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.666525 | orchestrator | 00:01:33.666 STDOUT terraform:  + image_id = (known after apply)
2025-07-24 00:01:33.666588 | orchestrator | 00:01:33.666 STDOUT terraform:  + metadata = (known after apply)
2025-07-24 00:01:33.666657 | orchestrator | 00:01:33.666 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-07-24 00:01:33.666663 | orchestrator | 00:01:33.666 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.666681 | orchestrator | 00:01:33.666 STDOUT terraform:  + size = 80
2025-07-24 00:01:33.666705 | orchestrator | 00:01:33.666 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-24 00:01:33.666758 | orchestrator | 00:01:33.666 STDOUT terraform:  + volume_type = "ssd"
2025-07-24 00:01:33.666767 | orchestrator | 00:01:33.666 STDOUT terraform:  }
2025-07-24 00:01:33.666796 | orchestrator | 00:01:33.666 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-24 00:01:33.666860 | orchestrator | 00:01:33.666 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-24 00:01:33.666912 | orchestrator | 00:01:33.666 STDOUT terraform:  + attachment = (known after apply)
2025-07-24 00:01:33.666921 | orchestrator | 00:01:33.666 STDOUT terraform:  + availability_zone = "nova"
2025-07-24 00:01:33.666995 | orchestrator | 00:01:33.666 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.667005 | orchestrator | 00:01:33.666 STDOUT terraform:  + image_id = (known after apply)
2025-07-24 00:01:33.667041 | orchestrator | 00:01:33.666 STDOUT terraform:  + metadata = (known after apply)
2025-07-24 00:01:33.667092 | orchestrator | 00:01:33.667 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-07-24 00:01:33.667132 | orchestrator | 00:01:33.667 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.667157 | orchestrator | 00:01:33.667 STDOUT terraform:  + size = 80
2025-07-24 00:01:33.667209 | orchestrator | 00:01:33.667 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-24 00:01:33.667218 | orchestrator | 00:01:33.667 STDOUT terraform:  + volume_type = "ssd"
2025-07-24 00:01:33.667222 | orchestrator | 00:01:33.667 STDOUT terraform:  }
2025-07-24 00:01:33.667280 | orchestrator | 00:01:33.667 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-24 00:01:33.667318 | orchestrator | 00:01:33.667 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-24 00:01:33.667381 | orchestrator | 00:01:33.667 STDOUT terraform:  + attachment = (known after apply)
2025-07-24 00:01:33.667386 | orchestrator | 00:01:33.667 STDOUT terraform:  + availability_zone = "nova"
2025-07-24 00:01:33.667424 | orchestrator | 00:01:33.667 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.667464 | orchestrator | 00:01:33.667 STDOUT terraform:  + image_id = (known after apply)
2025-07-24 00:01:33.667511 | orchestrator | 00:01:33.667 STDOUT terraform:  + metadata = (known after apply)
2025-07-24 00:01:33.667577 | orchestrator | 00:01:33.667 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-07-24 00:01:33.667617 | orchestrator | 00:01:33.667 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.667623 | orchestrator | 00:01:33.667 STDOUT terraform:  + size = 80
2025-07-24 00:01:33.667675 | orchestrator | 00:01:33.667 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-24 00:01:33.667684 | orchestrator | 00:01:33.667 STDOUT terraform:  + volume_type = "ssd"
2025-07-24 00:01:33.667690 | orchestrator | 00:01:33.667 STDOUT terraform:  }
2025-07-24 00:01:33.667773 | orchestrator | 00:01:33.667 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-24 00:01:33.667780 | orchestrator | 00:01:33.667 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-24 00:01:33.667827 | orchestrator | 00:01:33.667 STDOUT terraform:  + attachment = (known after apply)
2025-07-24 00:01:33.667922 | orchestrator | 00:01:33.667 STDOUT terraform:  + availability_zone = "nova"
2025-07-24 00:01:33.667937 | orchestrator | 00:01:33.667 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.667942 | orchestrator | 00:01:33.667 STDOUT terraform:  + image_id = (known after apply)
2025-07-24 00:01:33.667982 | orchestrator | 00:01:33.667 STDOUT terraform:  + metadata = (known after apply)
2025-07-24 00:01:33.668044 | orchestrator | 00:01:33.667 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-07-24 00:01:33.668101 | orchestrator | 00:01:33.668 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.668107 | orchestrator | 00:01:33.668 STDOUT terraform:  + size = 80
2025-07-24 00:01:33.668121 | orchestrator | 00:01:33.668 STDOUT terraform:  + volume_retype_policy = "never"
2025-07-24 00:01:33.668136 | orchestrator | 00:01:33.668 STDOUT terraform:  + volume_type = "ssd"
2025-07-24 00:01:33.668145 | orchestrator | 00:01:33.668 STDOUT terraform:  }
2025-07-24 00:01:33.668208 | orchestrator | 00:01:33.668 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-24 00:01:33.668249 | orchestrator | 00:01:33.668 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-24 00:01:33.668330 | orchestrator | 00:01:33.668 STDOUT
terraform:  + attachment = (known after apply) 2025-07-24 00:01:33.668335 | orchestrator | 00:01:33.668 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.668341 | orchestrator | 00:01:33.668 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.668393 | orchestrator | 00:01:33.668 STDOUT terraform:  + metadata = (known after apply) 2025-07-24 00:01:33.668432 | orchestrator | 00:01:33.668 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-24 00:01:33.668483 | orchestrator | 00:01:33.668 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.668492 | orchestrator | 00:01:33.668 STDOUT terraform:  + size = 20 2025-07-24 00:01:33.668531 | orchestrator | 00:01:33.668 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-24 00:01:33.668568 | orchestrator | 00:01:33.668 STDOUT terraform:  + volume_type = "ssd" 2025-07-24 00:01:33.668573 | orchestrator | 00:01:33.668 STDOUT terraform:  } 2025-07-24 00:01:33.668636 | orchestrator | 00:01:33.668 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-24 00:01:33.668647 | orchestrator | 00:01:33.668 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-24 00:01:33.668689 | orchestrator | 00:01:33.668 STDOUT terraform:  + attachment = (known after apply) 2025-07-24 00:01:33.668719 | orchestrator | 00:01:33.668 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.668775 | orchestrator | 00:01:33.668 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.668800 | orchestrator | 00:01:33.668 STDOUT terraform:  + metadata = (known after apply) 2025-07-24 00:01:33.668894 | orchestrator | 00:01:33.668 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-24 00:01:33.668905 | orchestrator | 00:01:33.668 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.668934 | orchestrator | 00:01:33.668 STDOUT terraform:  + size = 20 2025-07-24 00:01:33.668947 | 
orchestrator | 00:01:33.668 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-24 00:01:33.668984 | orchestrator | 00:01:33.668 STDOUT terraform:  + volume_type = "ssd" 2025-07-24 00:01:33.668994 | orchestrator | 00:01:33.668 STDOUT terraform:  } 2025-07-24 00:01:33.669050 | orchestrator | 00:01:33.668 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-24 00:01:33.669098 | orchestrator | 00:01:33.669 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-24 00:01:33.669166 | orchestrator | 00:01:33.669 STDOUT terraform:  + attachment = (known after apply) 2025-07-24 00:01:33.669176 | orchestrator | 00:01:33.669 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.669192 | orchestrator | 00:01:33.669 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.669242 | orchestrator | 00:01:33.669 STDOUT terraform:  + metadata = (known after apply) 2025-07-24 00:01:33.669315 | orchestrator | 00:01:33.669 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-24 00:01:33.669323 | orchestrator | 00:01:33.669 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.669328 | orchestrator | 00:01:33.669 STDOUT terraform:  + size = 20 2025-07-24 00:01:33.669380 | orchestrator | 00:01:33.669 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-24 00:01:33.669390 | orchestrator | 00:01:33.669 STDOUT terraform:  + volume_type = "ssd" 2025-07-24 00:01:33.669395 | orchestrator | 00:01:33.669 STDOUT terraform:  } 2025-07-24 00:01:33.669453 | orchestrator | 00:01:33.669 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-24 00:01:33.669496 | orchestrator | 00:01:33.669 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-24 00:01:33.669547 | orchestrator | 00:01:33.669 STDOUT terraform:  + attachment = (known after apply) 2025-07-24 00:01:33.669576 | orchestrator | 
00:01:33.669 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.669642 | orchestrator | 00:01:33.669 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.669648 | orchestrator | 00:01:33.669 STDOUT terraform:  + metadata = (known after apply) 2025-07-24 00:01:33.669703 | orchestrator | 00:01:33.669 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-24 00:01:33.669739 | orchestrator | 00:01:33.669 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.669746 | orchestrator | 00:01:33.669 STDOUT terraform:  + size = 20 2025-07-24 00:01:33.669782 | orchestrator | 00:01:33.669 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-24 00:01:33.669792 | orchestrator | 00:01:33.669 STDOUT terraform:  + volume_type = "ssd" 2025-07-24 00:01:33.669827 | orchestrator | 00:01:33.669 STDOUT terraform:  } 2025-07-24 00:01:33.669917 | orchestrator | 00:01:33.669 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-24 00:01:33.669987 | orchestrator | 00:01:33.669 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-24 00:01:33.669997 | orchestrator | 00:01:33.669 STDOUT terraform:  + attachment = (known after apply) 2025-07-24 00:01:33.670753 | orchestrator | 00:01:33.669 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.670794 | orchestrator | 00:01:33.670 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.670836 | orchestrator | 00:01:33.670 STDOUT terraform:  + metadata = (known after apply) 2025-07-24 00:01:33.670927 | orchestrator | 00:01:33.670 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-24 00:01:33.670971 | orchestrator | 00:01:33.670 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.670997 | orchestrator | 00:01:33.670 STDOUT terraform:  + size = 20 2025-07-24 00:01:33.671028 | orchestrator | 00:01:33.670 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-24 
00:01:33.671056 | orchestrator | 00:01:33.671 STDOUT terraform:  + volume_type = "ssd" 2025-07-24 00:01:33.671073 | orchestrator | 00:01:33.671 STDOUT terraform:  } 2025-07-24 00:01:33.671128 | orchestrator | 00:01:33.671 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-24 00:01:33.671177 | orchestrator | 00:01:33.671 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-24 00:01:33.671219 | orchestrator | 00:01:33.671 STDOUT terraform:  + attachment = (known after apply) 2025-07-24 00:01:33.671247 | orchestrator | 00:01:33.671 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.671294 | orchestrator | 00:01:33.671 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.671332 | orchestrator | 00:01:33.671 STDOUT terraform:  + metadata = (known after apply) 2025-07-24 00:01:33.671375 | orchestrator | 00:01:33.671 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-24 00:01:33.671425 | orchestrator | 00:01:33.671 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.671483 | orchestrator | 00:01:33.671 STDOUT terraform:  + size = 20 2025-07-24 00:01:33.671492 | orchestrator | 00:01:33.671 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-24 00:01:33.671497 | orchestrator | 00:01:33.671 STDOUT terraform:  + volume_type = "ssd" 2025-07-24 00:01:33.671501 | orchestrator | 00:01:33.671 STDOUT terraform:  } 2025-07-24 00:01:33.671553 | orchestrator | 00:01:33.671 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-24 00:01:33.671604 | orchestrator | 00:01:33.671 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-24 00:01:33.671642 | orchestrator | 00:01:33.671 STDOUT terraform:  + attachment = (known after apply) 2025-07-24 00:01:33.671678 | orchestrator | 00:01:33.671 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.671713 | 
orchestrator | 00:01:33.671 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.671793 | orchestrator | 00:01:33.671 STDOUT terraform:  + metadata = (known after apply) 2025-07-24 00:01:33.671801 | orchestrator | 00:01:33.671 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-24 00:01:33.671830 | orchestrator | 00:01:33.671 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.671879 | orchestrator | 00:01:33.671 STDOUT terraform:  + size = 20 2025-07-24 00:01:33.671887 | orchestrator | 00:01:33.671 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-24 00:01:33.671914 | orchestrator | 00:01:33.671 STDOUT terraform:  + volume_type = "ssd" 2025-07-24 00:01:33.671922 | orchestrator | 00:01:33.671 STDOUT terraform:  } 2025-07-24 00:01:33.671979 | orchestrator | 00:01:33.671 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-24 00:01:33.672027 | orchestrator | 00:01:33.671 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-24 00:01:33.672067 | orchestrator | 00:01:33.672 STDOUT terraform:  + attachment = (known after apply) 2025-07-24 00:01:33.672094 | orchestrator | 00:01:33.672 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.672135 | orchestrator | 00:01:33.672 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.672189 | orchestrator | 00:01:33.672 STDOUT terraform:  + metadata = (known after apply) 2025-07-24 00:01:33.672257 | orchestrator | 00:01:33.672 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-24 00:01:33.672272 | orchestrator | 00:01:33.672 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.672292 | orchestrator | 00:01:33.672 STDOUT terraform:  + size = 20 2025-07-24 00:01:33.672298 | orchestrator | 00:01:33.672 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-24 00:01:33.672334 | orchestrator | 00:01:33.672 STDOUT terraform:  + volume_type = "ssd" 
2025-07-24 00:01:33.672365 | orchestrator | 00:01:33.672 STDOUT terraform:  } 2025-07-24 00:01:33.672419 | orchestrator | 00:01:33.672 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-24 00:01:33.672441 | orchestrator | 00:01:33.672 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-24 00:01:33.672465 | orchestrator | 00:01:33.672 STDOUT terraform:  + attachment = (known after apply) 2025-07-24 00:01:33.672488 | orchestrator | 00:01:33.672 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.672548 | orchestrator | 00:01:33.672 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.672585 | orchestrator | 00:01:33.672 STDOUT terraform:  + metadata = (known after apply) 2025-07-24 00:01:33.672614 | orchestrator | 00:01:33.672 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-24 00:01:33.672661 | orchestrator | 00:01:33.672 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.672677 | orchestrator | 00:01:33.672 STDOUT terraform:  + size = 20 2025-07-24 00:01:33.672705 | orchestrator | 00:01:33.672 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-24 00:01:33.672754 | orchestrator | 00:01:33.672 STDOUT terraform:  + volume_type = "ssd" 2025-07-24 00:01:33.672759 | orchestrator | 00:01:33.672 STDOUT terraform:  } 2025-07-24 00:01:33.672925 | orchestrator | 00:01:33.672 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-24 00:01:33.672974 | orchestrator | 00:01:33.672 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-24 00:01:33.673010 | orchestrator | 00:01:33.672 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-24 00:01:33.673048 | orchestrator | 00:01:33.672 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-24 00:01:33.673081 | orchestrator | 00:01:33.673 STDOUT terraform:  + all_metadata = (known after apply) 
2025-07-24 00:01:33.673121 | orchestrator | 00:01:33.673 STDOUT terraform:  + all_tags = (known after apply) 2025-07-24 00:01:33.673147 | orchestrator | 00:01:33.673 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.673172 | orchestrator | 00:01:33.673 STDOUT terraform:  + config_drive = true 2025-07-24 00:01:33.673209 | orchestrator | 00:01:33.673 STDOUT terraform:  + created = (known after apply) 2025-07-24 00:01:33.673245 | orchestrator | 00:01:33.673 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-24 00:01:33.673276 | orchestrator | 00:01:33.673 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-24 00:01:33.673377 | orchestrator | 00:01:33.673 STDOUT terraform:  + force_delete = false 2025-07-24 00:01:33.673396 | orchestrator | 00:01:33.673 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-24 00:01:33.673425 | orchestrator | 00:01:33.673 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.673447 | orchestrator | 00:01:33.673 STDOUT terraform:  + image_id = (known after apply) 2025-07-24 00:01:33.673460 | orchestrator | 00:01:33.673 STDOUT terraform:  + image_name = (known after apply) 2025-07-24 00:01:33.673466 | orchestrator | 00:01:33.673 STDOUT terraform:  + key_pair = "testbed" 2025-07-24 00:01:33.673536 | orchestrator | 00:01:33.673 STDOUT terraform:  + name = "testbed-manager" 2025-07-24 00:01:33.673556 | orchestrator | 00:01:33.673 STDOUT terraform:  + power_state = "active" 2025-07-24 00:01:33.673615 | orchestrator | 00:01:33.673 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.673620 | orchestrator | 00:01:33.673 STDOUT terraform:  + security_groups = (known after apply) 2025-07-24 00:01:33.673624 | orchestrator | 00:01:33.673 STDOUT terraform:  + stop_before_destroy = false 2025-07-24 00:01:33.673659 | orchestrator | 00:01:33.673 STDOUT terraform:  + updated = (known after apply) 2025-07-24 00:01:33.673678 | orchestrator | 00:01:33.673 STDOUT terraform:  + 
user_data = (sensitive value) 2025-07-24 00:01:33.673753 | orchestrator | 00:01:33.673 STDOUT terraform:  + block_device { 2025-07-24 00:01:33.673777 | orchestrator | 00:01:33.673 STDOUT terraform:  + boot_index = 0 2025-07-24 00:01:33.673808 | orchestrator | 00:01:33.673 STDOUT terraform:  + delete_on_termination = false 2025-07-24 00:01:33.673827 | orchestrator | 00:01:33.673 STDOUT terraform:  + destination_type = "volume" 2025-07-24 00:01:33.673858 | orchestrator | 00:01:33.673 STDOUT terraform:  + multiattach = false 2025-07-24 00:01:33.673864 | orchestrator | 00:01:33.673 STDOUT terraform:  + source_type = "volume" 2025-07-24 00:01:33.673901 | orchestrator | 00:01:33.673 STDOUT terraform:  + uuid = (known after apply) 2025-07-24 00:01:33.673918 | orchestrator | 00:01:33.673 STDOUT terraform:  } 2025-07-24 00:01:33.673935 | orchestrator | 00:01:33.673 STDOUT terraform:  + network { 2025-07-24 00:01:33.673959 | orchestrator | 00:01:33.673 STDOUT terraform:  + access_network = false 2025-07-24 00:01:33.673994 | orchestrator | 00:01:33.673 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-24 00:01:33.674050 | orchestrator | 00:01:33.673 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-24 00:01:33.674084 | orchestrator | 00:01:33.674 STDOUT terraform:  + mac = (known after apply) 2025-07-24 00:01:33.674122 | orchestrator | 00:01:33.674 STDOUT terraform:  + name = (known after apply) 2025-07-24 00:01:33.674159 | orchestrator | 00:01:33.674 STDOUT terraform:  + port = (known after apply) 2025-07-24 00:01:33.674195 | orchestrator | 00:01:33.674 STDOUT terraform:  + uuid = (known after apply) 2025-07-24 00:01:33.674203 | orchestrator | 00:01:33.674 STDOUT terraform:  } 2025-07-24 00:01:33.674223 | orchestrator | 00:01:33.674 STDOUT terraform:  } 2025-07-24 00:01:33.674274 | orchestrator | 00:01:33.674 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-24 00:01:33.674322 | orchestrator | 00:01:33.674 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-24 00:01:33.674414 | orchestrator | 00:01:33.674 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-24 00:01:33.674430 | orchestrator | 00:01:33.674 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-24 00:01:33.674435 | orchestrator | 00:01:33.674 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-24 00:01:33.674505 | orchestrator | 00:01:33.674 STDOUT terraform:  + all_tags = (known after apply) 2025-07-24 00:01:33.674510 | orchestrator | 00:01:33.674 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.674549 | orchestrator | 00:01:33.674 STDOUT terraform:  + config_drive = true 2025-07-24 00:01:33.674595 | orchestrator | 00:01:33.674 STDOUT terraform:  + created = (known after apply) 2025-07-24 00:01:33.674600 | orchestrator | 00:01:33.674 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-24 00:01:33.674621 | orchestrator | 00:01:33.674 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-24 00:01:33.674650 | orchestrator | 00:01:33.674 STDOUT terraform:  + force_delete = false 2025-07-24 00:01:33.674714 | orchestrator | 00:01:33.674 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-24 00:01:33.674737 | orchestrator | 00:01:33.674 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.674743 | orchestrator | 00:01:33.674 STDOUT terraform:  + image_id = (known after apply) 2025-07-24 00:01:33.674789 | orchestrator | 00:01:33.674 STDOUT terraform:  + image_name = (known after apply) 2025-07-24 00:01:33.674814 | orchestrator | 00:01:33.674 STDOUT terraform:  + key_pair = "testbed" 2025-07-24 00:01:33.674898 | orchestrator | 00:01:33.674 STDOUT terraform:  + name = "testbed-node-0" 2025-07-24 00:01:33.674906 | orchestrator | 00:01:33.674 STDOUT terraform:  + power_state = "active" 2025-07-24 00:01:33.674945 | orchestrator | 00:01:33.674 STDOUT terraform:  + region = (known after 
apply) 2025-07-24 00:01:33.674987 | orchestrator | 00:01:33.674 STDOUT terraform:  + security_groups = (known after apply) 2025-07-24 00:01:33.675011 | orchestrator | 00:01:33.674 STDOUT terraform:  + stop_before_destroy = false 2025-07-24 00:01:33.675069 | orchestrator | 00:01:33.675 STDOUT terraform:  + updated = (known after apply) 2025-07-24 00:01:33.675108 | orchestrator | 00:01:33.675 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-24 00:01:33.675177 | orchestrator | 00:01:33.675 STDOUT terraform:  + block_device { 2025-07-24 00:01:33.675204 | orchestrator | 00:01:33.675 STDOUT terraform:  + boot_index = 0 2025-07-24 00:01:33.675216 | orchestrator | 00:01:33.675 STDOUT terraform:  + delete_on_termination = false 2025-07-24 00:01:33.675222 | orchestrator | 00:01:33.675 STDOUT terraform:  + destination_type = "volume" 2025-07-24 00:01:33.675246 | orchestrator | 00:01:33.675 STDOUT terraform:  + multiattach = false 2025-07-24 00:01:33.675261 | orchestrator | 00:01:33.675 STDOUT terraform:  + source_type = "volume" 2025-07-24 00:01:33.675304 | orchestrator | 00:01:33.675 STDOUT terraform:  + uuid = (known after apply) 2025-07-24 00:01:33.675428 | orchestrator | 00:01:33.675 STDOUT terraform:  } 2025-07-24 00:01:33.675435 | orchestrator | 00:01:33.675 STDOUT terraform:  + network { 2025-07-24 00:01:33.675439 | orchestrator | 00:01:33.675 STDOUT terraform:  + access_network = false 2025-07-24 00:01:33.675447 | orchestrator | 00:01:33.675 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-24 00:01:33.675452 | orchestrator | 00:01:33.675 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-24 00:01:33.675479 | orchestrator | 00:01:33.675 STDOUT terraform:  + mac = (known after apply) 2025-07-24 00:01:33.675573 | orchestrator | 00:01:33.675 STDOUT terraform:  + name = (known after apply) 2025-07-24 00:01:33.675587 | orchestrator | 00:01:33.675 STDOUT terraform:  + port = (known after apply) 2025-07-24 
00:01:33.675607 | orchestrator | 00:01:33.675 STDOUT terraform:  + uuid = (known after apply) 2025-07-24 00:01:33.675619 | orchestrator | 00:01:33.675 STDOUT terraform:  } 2025-07-24 00:01:33.675623 | orchestrator | 00:01:33.675 STDOUT terraform:  } 2025-07-24 00:01:33.675629 | orchestrator | 00:01:33.675 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-24 00:01:33.675661 | orchestrator | 00:01:33.675 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-24 00:01:33.675676 | orchestrator | 00:01:33.675 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-24 00:01:33.675730 | orchestrator | 00:01:33.675 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-24 00:01:33.675778 | orchestrator | 00:01:33.675 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-24 00:01:33.675793 | orchestrator | 00:01:33.675 STDOUT terraform:  + all_tags = (known after apply) 2025-07-24 00:01:33.676003 | orchestrator | 00:01:33.675 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.676010 | orchestrator | 00:01:33.675 STDOUT terraform:  + config_drive = true 2025-07-24 00:01:33.676016 | orchestrator | 00:01:33.675 STDOUT terraform:  + created = (known after apply) 2025-07-24 00:01:33.676043 | orchestrator | 00:01:33.675 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-24 00:01:33.676055 | orchestrator | 00:01:33.675 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-24 00:01:33.676066 | orchestrator | 00:01:33.675 STDOUT terraform:  + force_delete = false 2025-07-24 00:01:33.676096 | orchestrator | 00:01:33.676 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-24 00:01:33.676128 | orchestrator | 00:01:33.676 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.676135 | orchestrator | 00:01:33.676 STDOUT terraform:  + image_id = (known after apply) 2025-07-24 00:01:33.676170 | orchestrator | 00:01:33.676 STDOUT 
terraform:  + image_name = (known after apply) 2025-07-24 00:01:33.676210 | orchestrator | 00:01:33.676 STDOUT terraform:  + key_pair = "testbed" 2025-07-24 00:01:33.676222 | orchestrator | 00:01:33.676 STDOUT terraform:  + name = "testbed-node-1" 2025-07-24 00:01:33.676241 | orchestrator | 00:01:33.676 STDOUT terraform:  + power_state = "active" 2025-07-24 00:01:33.676297 | orchestrator | 00:01:33.676 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.676388 | orchestrator | 00:01:33.676 STDOUT terraform:  + security_groups = (known after apply) 2025-07-24 00:01:33.676442 | orchestrator | 00:01:33.676 STDOUT terraform:  + stop_before_destroy = false 2025-07-24 00:01:33.676448 | orchestrator | 00:01:33.676 STDOUT terraform:  + updated = (known after apply) 2025-07-24 00:01:33.676512 | orchestrator | 00:01:33.676 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-24 00:01:33.676568 | orchestrator | 00:01:33.676 STDOUT terraform:  + block_device { 2025-07-24 00:01:33.676643 | orchestrator | 00:01:33.676 STDOUT terraform:  + boot_index = 0 2025-07-24 00:01:33.676648 | orchestrator | 00:01:33.676 STDOUT terraform:  + delete_on_termination = false 2025-07-24 00:01:33.676653 | orchestrator | 00:01:33.676 STDOUT terraform:  + destination_type = "volume" 2025-07-24 00:01:33.676710 | orchestrator | 00:01:33.676 STDOUT terraform:  + multiattach = false 2025-07-24 00:01:33.676714 | orchestrator | 00:01:33.676 STDOUT terraform:  + source_type = "volume" 2025-07-24 00:01:33.676798 | orchestrator | 00:01:33.676 STDOUT terraform:  + uuid = (known after apply) 2025-07-24 00:01:33.676816 | orchestrator | 00:01:33.676 STDOUT terraform:  } 2025-07-24 00:01:33.676838 | orchestrator | 00:01:33.676 STDOUT terraform:  + network { 2025-07-24 00:01:33.676861 | orchestrator | 00:01:33.676 STDOUT terraform:  + access_network = false 2025-07-24 00:01:33.676865 | orchestrator | 00:01:33.676 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-07-24 00:01:33.676886 | orchestrator | 00:01:33.676 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-24 00:01:33.676908 | orchestrator | 00:01:33.676 STDOUT terraform:  + mac = (known after apply) 2025-07-24 00:01:33.676952 | orchestrator | 00:01:33.676 STDOUT terraform:  + name = (known after apply) 2025-07-24 00:01:33.676971 | orchestrator | 00:01:33.676 STDOUT terraform:  + port = (known after apply) 2025-07-24 00:01:33.677046 | orchestrator | 00:01:33.676 STDOUT terraform:  + uuid = (known after apply) 2025-07-24 00:01:33.677080 | orchestrator | 00:01:33.676 STDOUT terraform:  } 2025-07-24 00:01:33.677086 | orchestrator | 00:01:33.677 STDOUT terraform:  } 2025-07-24 00:01:33.677090 | orchestrator | 00:01:33.677 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-24 00:01:33.677132 | orchestrator | 00:01:33.677 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-24 00:01:33.677174 | orchestrator | 00:01:33.677 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-24 00:01:33.677189 | orchestrator | 00:01:33.677 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-24 00:01:33.677248 | orchestrator | 00:01:33.677 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-24 00:01:33.677308 | orchestrator | 00:01:33.677 STDOUT terraform:  + all_tags = (known after apply) 2025-07-24 00:01:33.677312 | orchestrator | 00:01:33.677 STDOUT terraform:  + availability_zone = "nova" 2025-07-24 00:01:33.677324 | orchestrator | 00:01:33.677 STDOUT terraform:  + config_drive = true 2025-07-24 00:01:33.677356 | orchestrator | 00:01:33.677 STDOUT terraform:  + created = (known after apply) 2025-07-24 00:01:33.677420 | orchestrator | 00:01:33.677 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-24 00:01:33.677474 | orchestrator | 00:01:33.677 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-24 00:01:33.677495 | orchestrator | 00:01:33.677 
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-24 00:01:33.689554 | orchestrator | 00:01:33.689 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-24 00:01:33.689559 | orchestrator | 00:01:33.689 STDOUT terraform:  + all_tags = (known after apply) 2025-07-24 00:01:33.689585 | orchestrator | 00:01:33.689 STDOUT terraform:  + device_id = (known after apply) 2025-07-24 00:01:33.689628 | orchestrator | 00:01:33.689 STDOUT terraform:  + device_owner = (known after apply) 2025-07-24 00:01:33.689666 | orchestrator | 00:01:33.689 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-24 00:01:33.689681 | orchestrator | 00:01:33.689 STDOUT terraform:  + dns_name = (known after apply) 2025-07-24 00:01:33.689701 | orchestrator | 00:01:33.689 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.689725 | orchestrator | 00:01:33.689 STDOUT terraform:  + mac_address = (known after apply) 2025-07-24 00:01:33.689761 | orchestrator | 00:01:33.689 STDOUT terraform:  + network_id = (known after apply) 2025-07-24 00:01:33.689830 | orchestrator | 00:01:33.689 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-24 00:01:33.689889 | orchestrator | 00:01:33.689 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-24 00:01:33.689952 | orchestrator | 00:01:33.689 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.689957 | orchestrator | 00:01:33.689 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-24 00:01:33.689962 | orchestrator | 00:01:33.689 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.689966 | orchestrator | 00:01:33.689 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.689970 | orchestrator | 00:01:33.689 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-24 00:01:33.689994 | orchestrator | 00:01:33.689 STDOUT terraform:  } 2025-07-24 00:01:33.690030 | orchestrator | 00:01:33.689 STDOUT terraform:  
+ allowed_address_pairs { 2025-07-24 00:01:33.690035 | orchestrator | 00:01:33.689 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-24 00:01:33.690039 | orchestrator | 00:01:33.689 STDOUT terraform:  } 2025-07-24 00:01:33.690059 | orchestrator | 00:01:33.690 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.690100 | orchestrator | 00:01:33.690 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-24 00:01:33.690126 | orchestrator | 00:01:33.690 STDOUT terraform:  } 2025-07-24 00:01:33.690138 | orchestrator | 00:01:33.690 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.690143 | orchestrator | 00:01:33.690 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-24 00:01:33.690148 | orchestrator | 00:01:33.690 STDOUT terraform:  } 2025-07-24 00:01:33.690183 | orchestrator | 00:01:33.690 STDOUT terraform:  + binding (known after apply) 2025-07-24 00:01:33.690205 | orchestrator | 00:01:33.690 STDOUT terraform:  + fixed_ip { 2025-07-24 00:01:33.690219 | orchestrator | 00:01:33.690 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-24 00:01:33.690224 | orchestrator | 00:01:33.690 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-24 00:01:33.690230 | orchestrator | 00:01:33.690 STDOUT terraform:  } 2025-07-24 00:01:33.690279 | orchestrator | 00:01:33.690 STDOUT terraform:  } 2025-07-24 00:01:33.690332 | orchestrator | 00:01:33.690 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-24 00:01:33.690343 | orchestrator | 00:01:33.690 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-24 00:01:33.690384 | orchestrator | 00:01:33.690 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-24 00:01:33.690409 | orchestrator | 00:01:33.690 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-24 00:01:33.690435 | orchestrator | 00:01:33.690 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-07-24 00:01:33.690441 | orchestrator | 00:01:33.690 STDOUT terraform:  + all_tags = (known after apply) 2025-07-24 00:01:33.690476 | orchestrator | 00:01:33.690 STDOUT terraform:  + device_id = (known after apply) 2025-07-24 00:01:33.690517 | orchestrator | 00:01:33.690 STDOUT terraform:  + device_owner = (known after apply) 2025-07-24 00:01:33.690546 | orchestrator | 00:01:33.690 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-24 00:01:33.690567 | orchestrator | 00:01:33.690 STDOUT terraform:  + dns_name = (known after apply) 2025-07-24 00:01:33.690628 | orchestrator | 00:01:33.690 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.690635 | orchestrator | 00:01:33.690 STDOUT terraform:  + mac_address = (known after apply) 2025-07-24 00:01:33.690691 | orchestrator | 00:01:33.690 STDOUT terraform:  + network_id = (known after apply) 2025-07-24 00:01:33.690727 | orchestrator | 00:01:33.690 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-24 00:01:33.690759 | orchestrator | 00:01:33.690 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-24 00:01:33.690828 | orchestrator | 00:01:33.690 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.690902 | orchestrator | 00:01:33.690 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-24 00:01:33.690909 | orchestrator | 00:01:33.690 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.690915 | orchestrator | 00:01:33.690 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.690920 | orchestrator | 00:01:33.690 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-24 00:01:33.690931 | orchestrator | 00:01:33.690 STDOUT terraform:  } 2025-07-24 00:01:33.690952 | orchestrator | 00:01:33.690 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.691010 | orchestrator | 00:01:33.690 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-24 00:01:33.691016 | 
orchestrator | 00:01:33.690 STDOUT terraform:  } 2025-07-24 00:01:33.691020 | orchestrator | 00:01:33.690 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.691033 | orchestrator | 00:01:33.691 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-24 00:01:33.691093 | orchestrator | 00:01:33.691 STDOUT terraform:  } 2025-07-24 00:01:33.691098 | orchestrator | 00:01:33.691 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.691101 | orchestrator | 00:01:33.691 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-24 00:01:33.691105 | orchestrator | 00:01:33.691 STDOUT terraform:  } 2025-07-24 00:01:33.691136 | orchestrator | 00:01:33.691 STDOUT terraform:  + binding (known after apply) 2025-07-24 00:01:33.691148 | orchestrator | 00:01:33.691 STDOUT terraform:  + fixed_ip { 2025-07-24 00:01:33.691193 | orchestrator | 00:01:33.691 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-24 00:01:33.691199 | orchestrator | 00:01:33.691 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-24 00:01:33.691222 | orchestrator | 00:01:33.691 STDOUT terraform:  } 2025-07-24 00:01:33.691243 | orchestrator | 00:01:33.691 STDOUT terraform:  } 2025-07-24 00:01:33.691249 | orchestrator | 00:01:33.691 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-24 00:01:33.691253 | orchestrator | 00:01:33.691 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-24 00:01:33.691295 | orchestrator | 00:01:33.691 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-24 00:01:33.691319 | orchestrator | 00:01:33.691 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-24 00:01:33.691358 | orchestrator | 00:01:33.691 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-24 00:01:33.691391 | orchestrator | 00:01:33.691 STDOUT terraform:  + all_tags = (known after apply) 2025-07-24 00:01:33.691422 | orchestrator | 
00:01:33.691 STDOUT terraform:  + device_id = (known after apply) 2025-07-24 00:01:33.691494 | orchestrator | 00:01:33.691 STDOUT terraform:  + device_owner = (known after apply) 2025-07-24 00:01:33.691508 | orchestrator | 00:01:33.691 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-24 00:01:33.691558 | orchestrator | 00:01:33.691 STDOUT terraform:  + dns_name = (known after apply) 2025-07-24 00:01:33.691613 | orchestrator | 00:01:33.691 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.691655 | orchestrator | 00:01:33.691 STDOUT terraform:  + mac_address = (known after apply) 2025-07-24 00:01:33.691680 | orchestrator | 00:01:33.691 STDOUT terraform:  + network_id = (known after apply) 2025-07-24 00:01:33.691712 | orchestrator | 00:01:33.691 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-24 00:01:33.691729 | orchestrator | 00:01:33.691 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-24 00:01:33.691733 | orchestrator | 00:01:33.691 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.691762 | orchestrator | 00:01:33.691 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-24 00:01:33.691784 | orchestrator | 00:01:33.691 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.691830 | orchestrator | 00:01:33.691 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.691898 | orchestrator | 00:01:33.691 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-24 00:01:33.691913 | orchestrator | 00:01:33.691 STDOUT terraform:  } 2025-07-24 00:01:33.691933 | orchestrator | 00:01:33.691 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.691964 | orchestrator | 00:01:33.691 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-24 00:01:33.691976 | orchestrator | 00:01:33.691 STDOUT terraform:  } 2025-07-24 00:01:33.691982 | orchestrator | 00:01:33.691 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 
00:01:33.691985 | orchestrator | 00:01:33.691 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-24 00:01:33.691989 | orchestrator | 00:01:33.691 STDOUT terraform:  } 2025-07-24 00:01:33.691993 | orchestrator | 00:01:33.691 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.691998 | orchestrator | 00:01:33.691 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-24 00:01:33.692002 | orchestrator | 00:01:33.691 STDOUT terraform:  } 2025-07-24 00:01:33.692034 | orchestrator | 00:01:33.691 STDOUT terraform:  + binding (known after apply) 2025-07-24 00:01:33.692078 | orchestrator | 00:01:33.692 STDOUT terraform:  + fixed_ip { 2025-07-24 00:01:33.692082 | orchestrator | 00:01:33.692 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-24 00:01:33.692086 | orchestrator | 00:01:33.692 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-24 00:01:33.692091 | orchestrator | 00:01:33.692 STDOUT terraform:  } 2025-07-24 00:01:33.692095 | orchestrator | 00:01:33.692 STDOUT terraform:  } 2025-07-24 00:01:33.692151 | orchestrator | 00:01:33.692 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-24 00:01:33.692188 | orchestrator | 00:01:33.692 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-24 00:01:33.692202 | orchestrator | 00:01:33.692 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-24 00:01:33.692239 | orchestrator | 00:01:33.692 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-24 00:01:33.692275 | orchestrator | 00:01:33.692 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-24 00:01:33.692336 | orchestrator | 00:01:33.692 STDOUT terraform:  + all_tags = (known after apply) 2025-07-24 00:01:33.692349 | orchestrator | 00:01:33.692 STDOUT terraform:  + device_id = (known after apply) 2025-07-24 00:01:33.692366 | orchestrator | 00:01:33.692 STDOUT terraform:  + device_owner = (known after 
apply) 2025-07-24 00:01:33.692419 | orchestrator | 00:01:33.692 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-24 00:01:33.692486 | orchestrator | 00:01:33.692 STDOUT terraform:  + dns_name = (known after apply) 2025-07-24 00:01:33.692491 | orchestrator | 00:01:33.692 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.692525 | orchestrator | 00:01:33.692 STDOUT terraform:  + mac_address = (known after apply) 2025-07-24 00:01:33.692530 | orchestrator | 00:01:33.692 STDOUT terraform:  + network_id = (known after apply) 2025-07-24 00:01:33.692558 | orchestrator | 00:01:33.692 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-24 00:01:33.692612 | orchestrator | 00:01:33.692 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-24 00:01:33.692648 | orchestrator | 00:01:33.692 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.692686 | orchestrator | 00:01:33.692 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-24 00:01:33.692690 | orchestrator | 00:01:33.692 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.692694 | orchestrator | 00:01:33.692 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.692712 | orchestrator | 00:01:33.692 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-24 00:01:33.692766 | orchestrator | 00:01:33.692 STDOUT terraform:  } 2025-07-24 00:01:33.692771 | orchestrator | 00:01:33.692 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.692775 | orchestrator | 00:01:33.692 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-24 00:01:33.692780 | orchestrator | 00:01:33.692 STDOUT terraform:  } 2025-07-24 00:01:33.692785 | orchestrator | 00:01:33.692 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.692817 | orchestrator | 00:01:33.692 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-24 00:01:33.692821 | orchestrator | 00:01:33.692 STDOUT terraform:  } 
2025-07-24 00:01:33.692852 | orchestrator | 00:01:33.692 STDOUT terraform:  + allowed_address_pairs { 2025-07-24 00:01:33.692874 | orchestrator | 00:01:33.692 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-24 00:01:33.692879 | orchestrator | 00:01:33.692 STDOUT terraform:  } 2025-07-24 00:01:33.692909 | orchestrator | 00:01:33.692 STDOUT terraform:  + binding (known after apply) 2025-07-24 00:01:33.692914 | orchestrator | 00:01:33.692 STDOUT terraform:  + fixed_ip { 2025-07-24 00:01:33.692960 | orchestrator | 00:01:33.692 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-24 00:01:33.692974 | orchestrator | 00:01:33.692 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-24 00:01:33.692980 | orchestrator | 00:01:33.692 STDOUT terraform:  } 2025-07-24 00:01:33.692983 | orchestrator | 00:01:33.692 STDOUT terraform:  } 2025-07-24 00:01:33.693019 | orchestrator | 00:01:33.692 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-24 00:01:33.693085 | orchestrator | 00:01:33.693 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-24 00:01:33.693091 | orchestrator | 00:01:33.693 STDOUT terraform:  + force_destroy = false 2025-07-24 00:01:33.693097 | orchestrator | 00:01:33.693 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.693142 | orchestrator | 00:01:33.693 STDOUT terraform:  + port_id = (known after apply) 2025-07-24 00:01:33.693176 | orchestrator | 00:01:33.693 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.693193 | orchestrator | 00:01:33.693 STDOUT terraform:  + router_id = (known after apply) 2025-07-24 00:01:33.693197 | orchestrator | 00:01:33.693 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-24 00:01:33.693201 | orchestrator | 00:01:33.693 STDOUT terraform:  } 2025-07-24 00:01:33.693223 | orchestrator | 00:01:33.693 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-07-24 00:01:33.693270 | orchestrator | 00:01:33.693 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-24 00:01:33.693297 | orchestrator | 00:01:33.693 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-24 00:01:33.693328 | orchestrator | 00:01:33.693 STDOUT terraform:  + all_tags = (known after apply) 2025-07-24 00:01:33.693360 | orchestrator | 00:01:33.693 STDOUT terraform:  + availability_zone_hints = [ 2025-07-24 00:01:33.693376 | orchestrator | 00:01:33.693 STDOUT terraform:  + "nova", 2025-07-24 00:01:33.693390 | orchestrator | 00:01:33.693 STDOUT terraform:  ] 2025-07-24 00:01:33.693395 | orchestrator | 00:01:33.693 STDOUT terraform:  + distributed = (known after apply) 2025-07-24 00:01:33.693454 | orchestrator | 00:01:33.693 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-24 00:01:33.693504 | orchestrator | 00:01:33.693 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-24 00:01:33.693523 | orchestrator | 00:01:33.693 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-24 00:01:33.693555 | orchestrator | 00:01:33.693 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.693576 | orchestrator | 00:01:33.693 STDOUT terraform:  + name = "testbed" 2025-07-24 00:01:33.693596 | orchestrator | 00:01:33.693 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.693649 | orchestrator | 00:01:33.693 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.693665 | orchestrator | 00:01:33.693 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-24 00:01:33.693678 | orchestrator | 00:01:33.693 STDOUT terraform:  } 2025-07-24 00:01:33.693768 | orchestrator | 00:01:33.693 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-24 00:01:33.693782 | orchestrator | 00:01:33.693 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-24 00:01:33.693788 | orchestrator | 00:01:33.693 STDOUT terraform:  + description = "ssh" 2025-07-24 00:01:33.693835 | orchestrator | 00:01:33.693 STDOUT terraform:  + direction = "ingress" 2025-07-24 00:01:33.693891 | orchestrator | 00:01:33.693 STDOUT terraform:  + ethertype = "IPv4" 2025-07-24 00:01:33.693916 | orchestrator | 00:01:33.693 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.693920 | orchestrator | 00:01:33.693 STDOUT terraform:  + port_range_max = 22 2025-07-24 00:01:33.693933 | orchestrator | 00:01:33.693 STDOUT terraform:  + port_range_min = 22 2025-07-24 00:01:33.693980 | orchestrator | 00:01:33.693 STDOUT terraform:  + protocol = "tcp" 2025-07-24 00:01:33.693985 | orchestrator | 00:01:33.693 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.694050 | orchestrator | 00:01:33.693 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-24 00:01:33.694073 | orchestrator | 00:01:33.693 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-24 00:01:33.694087 | orchestrator | 00:01:33.694 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-24 00:01:33.694153 | orchestrator | 00:01:33.694 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-24 00:01:33.694167 | orchestrator | 00:01:33.694 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.694171 | orchestrator | 00:01:33.694 STDOUT terraform:  } 2025-07-24 00:01:33.694231 | orchestrator | 00:01:33.694 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-24 00:01:33.694261 | orchestrator | 00:01:33.694 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-24 00:01:33.694277 | orchestrator | 00:01:33.694 STDOUT terraform:  + description = "wireguard" 2025-07-24 00:01:33.694368 | orchestrator 
| 00:01:33.694 STDOUT terraform:  + direction = "ingress" 2025-07-24 00:01:33.694375 | orchestrator | 00:01:33.694 STDOUT terraform:  + ethertype = "IPv4" 2025-07-24 00:01:33.694380 | orchestrator | 00:01:33.694 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.694385 | orchestrator | 00:01:33.694 STDOUT terraform:  + port_range_max = 51820 2025-07-24 00:01:33.694442 | orchestrator | 00:01:33.694 STDOUT terraform:  + port_range_min = 51820 2025-07-24 00:01:33.694447 | orchestrator | 00:01:33.694 STDOUT terraform:  + protocol = "udp" 2025-07-24 00:01:33.694452 | orchestrator | 00:01:33.694 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.694515 | orchestrator | 00:01:33.694 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-24 00:01:33.694520 | orchestrator | 00:01:33.694 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-24 00:01:33.694525 | orchestrator | 00:01:33.694 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-24 00:01:33.694607 | orchestrator | 00:01:33.694 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-24 00:01:33.694649 | orchestrator | 00:01:33.694 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.694654 | orchestrator | 00:01:33.694 STDOUT terraform:  } 2025-07-24 00:01:33.694658 | orchestrator | 00:01:33.694 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-07-24 00:01:33.694737 | orchestrator | 00:01:33.694 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-24 00:01:33.694772 | orchestrator | 00:01:33.694 STDOUT terraform:  + direction = "ingress" 2025-07-24 00:01:33.694795 | orchestrator | 00:01:33.694 STDOUT terraform:  + ethertype = "IPv4" 2025-07-24 00:01:33.694806 | orchestrator | 00:01:33.694 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.694810 | orchestrator | 
00:01:33.694 STDOUT terraform:  + protocol = "tcp" 2025-07-24 00:01:33.694880 | orchestrator | 00:01:33.694 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.694885 | orchestrator | 00:01:33.694 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-24 00:01:33.694898 | orchestrator | 00:01:33.694 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-24 00:01:33.694945 | orchestrator | 00:01:33.694 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-24 00:01:33.694972 | orchestrator | 00:01:33.694 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-24 00:01:33.695012 | orchestrator | 00:01:33.694 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.695016 | orchestrator | 00:01:33.694 STDOUT terraform:  } 2025-07-24 00:01:33.695112 | orchestrator | 00:01:33.694 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-24 00:01:33.695174 | orchestrator | 00:01:33.695 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-24 00:01:33.695189 | orchestrator | 00:01:33.695 STDOUT terraform:  + direction = "ingress" 2025-07-24 00:01:33.695193 | orchestrator | 00:01:33.695 STDOUT terraform:  + ethertype = "IPv4" 2025-07-24 00:01:33.695206 | orchestrator | 00:01:33.695 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.695247 | orchestrator | 00:01:33.695 STDOUT terraform:  + protocol = "udp" 2025-07-24 00:01:33.695292 | orchestrator | 00:01:33.695 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.695332 | orchestrator | 00:01:33.695 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-24 00:01:33.695346 | orchestrator | 00:01:33.695 STDOUT terraform:  + remot 2025-07-24 00:01:33.695421 | orchestrator | 00:01:33.695 STDOUT terraform: e_group_id = (known after apply) 2025-07-24 00:01:33.695457 | 
orchestrator | 00:01:33.695 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-24 00:01:33.695505 | orchestrator | 00:01:33.695 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-24 00:01:33.695527 | orchestrator | 00:01:33.695 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.695541 | orchestrator | 00:01:33.695 STDOUT terraform:  } 2025-07-24 00:01:33.695605 | orchestrator | 00:01:33.695 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-24 00:01:33.695664 | orchestrator | 00:01:33.695 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-24 00:01:33.695684 | orchestrator | 00:01:33.695 STDOUT terraform:  + direction = "ingress" 2025-07-24 00:01:33.695697 | orchestrator | 00:01:33.695 STDOUT terraform:  + ethertype = "IPv4" 2025-07-24 00:01:33.695731 | orchestrator | 00:01:33.695 STDOUT terraform:  + id = (known after apply) 2025-07-24 00:01:33.695743 | orchestrator | 00:01:33.695 STDOUT terraform:  + protocol = "icmp" 2025-07-24 00:01:33.695837 | orchestrator | 00:01:33.695 STDOUT terraform:  + region = (known after apply) 2025-07-24 00:01:33.695862 | orchestrator | 00:01:33.695 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-24 00:01:33.695866 | orchestrator | 00:01:33.695 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-24 00:01:33.695943 | orchestrator | 00:01:33.695 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-24 00:01:33.695965 | orchestrator | 00:01:33.695 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-24 00:01:33.695984 | orchestrator | 00:01:33.695 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-24 00:01:33.695996 | orchestrator | 00:01:33.695 STDOUT terraform:  } 2025-07-24 00:01:33.696001 | orchestrator | 00:01:33.695 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-07-24 00:01:33.696066 | orchestrator | 00:01:33.695 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-07-24 00:01:33.696079 | orchestrator | 00:01:33.696 STDOUT terraform:  + direction = "ingress"
2025-07-24 00:01:33.696084 | orchestrator | 00:01:33.696 STDOUT terraform:  + ethertype = "IPv4"
2025-07-24 00:01:33.696117 | orchestrator | 00:01:33.696 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.696149 | orchestrator | 00:01:33.696 STDOUT terraform:  + protocol = "tcp"
2025-07-24 00:01:33.696259 | orchestrator | 00:01:33.696 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.696343 | orchestrator | 00:01:33.696 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-24 00:01:33.696372 | orchestrator | 00:01:33.696 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-24 00:01:33.696415 | orchestrator | 00:01:33.696 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-24 00:01:33.696428 | orchestrator | 00:01:33.696 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-24 00:01:33.696447 | orchestrator | 00:01:33.696 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-24 00:01:33.696451 | orchestrator | 00:01:33.696 STDOUT terraform:  }
2025-07-24 00:01:33.696464 | orchestrator | 00:01:33.696 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-07-24 00:01:33.696565 | orchestrator | 00:01:33.696 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-07-24 00:01:33.696599 | orchestrator | 00:01:33.696 STDOUT terraform:  + direction = "ingress"
2025-07-24 00:01:33.696619 | orchestrator | 00:01:33.696 STDOUT terraform:  + ethertype = "IPv4"
2025-07-24 00:01:33.696631 | orchestrator | 00:01:33.696 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.696635 | orchestrator | 00:01:33.696 STDOUT terraform:  + protocol = "udp"
2025-07-24 00:01:33.696640 | orchestrator | 00:01:33.696 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.696673 | orchestrator | 00:01:33.696 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-24 00:01:33.696707 | orchestrator | 00:01:33.696 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-24 00:01:33.696773 | orchestrator | 00:01:33.696 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-24 00:01:33.696801 | orchestrator | 00:01:33.696 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-24 00:01:33.696831 | orchestrator | 00:01:33.696 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-24 00:01:33.696835 | orchestrator | 00:01:33.696 STDOUT terraform:  }
2025-07-24 00:01:33.696978 | orchestrator | 00:01:33.696 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-07-24 00:01:33.696985 | orchestrator | 00:01:33.696 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-07-24 00:01:33.697007 | orchestrator | 00:01:33.696 STDOUT terraform:  + direction = "ingress"
2025-07-24 00:01:33.697027 | orchestrator | 00:01:33.696 STDOUT terraform:  + ethertype = "IPv4"
2025-07-24 00:01:33.697085 | orchestrator | 00:01:33.696 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.697089 | orchestrator | 00:01:33.696 STDOUT terraform:  + protocol = "icmp"
2025-07-24 00:01:33.697093 | orchestrator | 00:01:33.697 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.697097 | orchestrator | 00:01:33.697 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-24 00:01:33.697102 | orchestrator | 00:01:33.697 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-24 00:01:33.697137 | orchestrator | 00:01:33.697 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-24 00:01:33.697166 | orchestrator | 00:01:33.697 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-24 00:01:33.697219 | orchestrator | 00:01:33.697 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-24 00:01:33.697224 | orchestrator | 00:01:33.697 STDOUT terraform:  }
2025-07-24 00:01:33.697269 | orchestrator | 00:01:33.697 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-07-24 00:01:33.697322 | orchestrator | 00:01:33.697 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-07-24 00:01:33.697348 | orchestrator | 00:01:33.697 STDOUT terraform:  + description = "vrrp"
2025-07-24 00:01:33.697417 | orchestrator | 00:01:33.697 STDOUT terraform:  + direction = "ingress"
2025-07-24 00:01:33.697430 | orchestrator | 00:01:33.697 STDOUT terraform:  + ethertype = "IPv4"
2025-07-24 00:01:33.697435 | orchestrator | 00:01:33.697 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.697440 | orchestrator | 00:01:33.697 STDOUT terraform:  + protocol = "112"
2025-07-24 00:01:33.697444 | orchestrator | 00:01:33.697 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.697492 | orchestrator | 00:01:33.697 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-24 00:01:33.697510 | orchestrator | 00:01:33.697 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-24 00:01:33.697578 | orchestrator | 00:01:33.697 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-24 00:01:33.697623 | orchestrator | 00:01:33.697 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-24 00:01:33.697654 | orchestrator | 00:01:33.697 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-24 00:01:33.697699 | orchestrator | 00:01:33.697 STDOUT terraform:  }
2025-07-24 00:01:33.697712 | orchestrator | 00:01:33.697 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-07-24 00:01:33.697718 | orchestrator | 00:01:33.697 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-07-24 00:01:33.697722 | orchestrator | 00:01:33.697 STDOUT terraform:  + all_tags = (known after apply)
2025-07-24 00:01:33.697749 | orchestrator | 00:01:33.697 STDOUT terraform:  + description = "management security group"
2025-07-24 00:01:33.697755 | orchestrator | 00:01:33.697 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.697808 | orchestrator | 00:01:33.697 STDOUT terraform:  + name = "testbed-management"
2025-07-24 00:01:33.697834 | orchestrator | 00:01:33.697 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.697838 | orchestrator | 00:01:33.697 STDOUT terraform:  + stateful = (known after apply)
2025-07-24 00:01:33.697883 | orchestrator | 00:01:33.697 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-24 00:01:33.697888 | orchestrator | 00:01:33.697 STDOUT terraform:  }
2025-07-24 00:01:33.697923 | orchestrator | 00:01:33.697 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-07-24 00:01:33.698087 | orchestrator | 00:01:33.697 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-07-24 00:01:33.698102 | orchestrator | 00:01:33.697 STDOUT terraform:  + all_tags = (known after apply)
2025-07-24 00:01:33.698106 | orchestrator | 00:01:33.697 STDOUT terraform:  + description = "node security group"
2025-07-24 00:01:33.698110 | orchestrator | 00:01:33.698 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.698139 | orchestrator | 00:01:33.698 STDOUT terraform:  + name = "testbed-node"
2025-07-24 00:01:33.698151 | orchestrator | 00:01:33.698 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.698155 | orchestrator | 00:01:33.698 STDOUT terraform:  + stateful = (known after apply)
2025-07-24 00:01:33.698160 | orchestrator | 00:01:33.698 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-24 00:01:33.698164 | orchestrator | 00:01:33.698 STDOUT terraform:  }
2025-07-24 00:01:33.698211 | orchestrator | 00:01:33.698 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-07-24 00:01:33.698272 | orchestrator | 00:01:33.698 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-07-24 00:01:33.698286 | orchestrator | 00:01:33.698 STDOUT terraform:  + all_tags = (known after apply)
2025-07-24 00:01:33.698314 | orchestrator | 00:01:33.698 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-07-24 00:01:33.698319 | orchestrator | 00:01:33.698 STDOUT terraform:  + dns_nameservers = [
2025-07-24 00:01:33.698350 | orchestrator | 00:01:33.698 STDOUT terraform:  + "8.8.8.8",
2025-07-24 00:01:33.698387 | orchestrator | 00:01:33.698 STDOUT terraform:  + "9.9.9.9",
2025-07-24 00:01:33.698411 | orchestrator | 00:01:33.698 STDOUT terraform:  ]
2025-07-24 00:01:33.698417 | orchestrator | 00:01:33.698 STDOUT terraform:  + enable_dhcp = true
2025-07-24 00:01:33.698447 | orchestrator | 00:01:33.698 STDOUT terraform:  + gateway_ip = (known after apply)
2025-07-24 00:01:33.698452 | orchestrator | 00:01:33.698 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.698465 | orchestrator | 00:01:33.698 STDOUT terraform:  + ip_version = 4
2025-07-24 00:01:33.698478 | orchestrator | 00:01:33.698 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-07-24 00:01:33.698503 | orchestrator | 00:01:33.698 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-07-24 00:01:33.698542 | orchestrator | 00:01:33.698 STDOUT terraform:  + name = "subnet-testbed-management"
2025-07-24 00:01:33.698547 | orchestrator | 00:01:33.698 STDOUT terraform:  + network_id = (known after apply)
2025-07-24 00:01:33.698605 | orchestrator | 00:01:33.698 STDOUT terraform:  + no_gateway = false
2025-07-24 00:01:33.698609 | orchestrator | 00:01:33.698 STDOUT terraform:  + region = (known after apply)
2025-07-24 00:01:33.698613 | orchestrator | 00:01:33.698 STDOUT terraform:  + service_types = (known after apply)
2025-07-24 00:01:33.698628 | orchestrator | 00:01:33.698 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-24 00:01:33.698648 | orchestrator | 00:01:33.698 STDOUT terraform:  + allocation_pool {
2025-07-24 00:01:33.698662 | orchestrator | 00:01:33.698 STDOUT terraform:  + end = "192.168.31.250"
2025-07-24 00:01:33.698688 | orchestrator | 00:01:33.698 STDOUT terraform:  + start = "192.168.31.200"
2025-07-24 00:01:33.698703 | orchestrator | 00:01:33.698 STDOUT terraform:  }
2025-07-24 00:01:33.698716 | orchestrator | 00:01:33.698 STDOUT terraform:  }
2025-07-24 00:01:33.698722 | orchestrator | 00:01:33.698 STDOUT terraform:  # terraform_data.image will be created
2025-07-24 00:01:33.698749 | orchestrator | 00:01:33.698 STDOUT terraform:  + resource "terraform_data" "image" {
2025-07-24 00:01:33.698778 | orchestrator | 00:01:33.698 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.698784 | orchestrator | 00:01:33.698 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-24 00:01:33.698827 | orchestrator | 00:01:33.698 STDOUT terraform:  + output = (known after apply)
2025-07-24 00:01:33.698896 | orchestrator | 00:01:33.698 STDOUT terraform:  }
2025-07-24 00:01:33.698901 | orchestrator | 00:01:33.698 STDOUT terraform:  # terraform_data.image_node will be created
2025-07-24 00:01:33.698941 | orchestrator | 00:01:33.698 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-07-24 00:01:33.698946 | orchestrator | 00:01:33.698 STDOUT terraform:  + id = (known after apply)
2025-07-24 00:01:33.698950 | orchestrator | 00:01:33.698 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-24 00:01:33.698954 | orchestrator | 00:01:33.698 STDOUT terraform:  + output = (known after apply)
2025-07-24 00:01:33.698987 | orchestrator | 00:01:33.698 STDOUT terraform:  }
2025-07-24 00:01:33.698993 | orchestrator | 00:01:33.698 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-07-24 00:01:33.699002 | orchestrator | 00:01:33.698 STDOUT terraform: Changes to Outputs:
2025-07-24 00:01:33.699033 | orchestrator | 00:01:33.698 STDOUT terraform:  + manager_address = (sensitive value)
2025-07-24 00:01:33.699048 | orchestrator | 00:01:33.698 STDOUT terraform:  + private_key = (sensitive value)
2025-07-24 00:01:33.872206 | orchestrator | 00:01:33.872 STDOUT terraform: terraform_data.image_node: Creating...
2025-07-24 00:01:33.940007 | orchestrator | 00:01:33.935 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=fd663d66-f038-cee7-6071-b46a171e2f5a]
2025-07-24 00:01:33.940102 | orchestrator | 00:01:33.936 STDOUT terraform: terraform_data.image: Creating...
2025-07-24 00:01:33.940116 | orchestrator | 00:01:33.936 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=0bc6227b-a923-f452-3350-1768fbc34494]
2025-07-24 00:01:33.950460 | orchestrator | 00:01:33.950 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-07-24 00:01:33.950931 | orchestrator | 00:01:33.950 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-07-24 00:01:33.974339 | orchestrator | 00:01:33.974 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-07-24 00:01:33.975056 | orchestrator | 00:01:33.974 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-07-24 00:01:33.975800 | orchestrator | 00:01:33.975 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-07-24 00:01:33.976692 | orchestrator | 00:01:33.976 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-24 00:01:33.977633 | orchestrator | 00:01:33.977 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-07-24 00:01:33.980719 | orchestrator | 00:01:33.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-07-24 00:01:33.980774 | orchestrator | 00:01:33.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-07-24 00:01:33.986907 | orchestrator | 00:01:33.986 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-07-24 00:01:34.420678 | orchestrator | 00:01:34.420 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-24 00:01:34.424192 | orchestrator | 00:01:34.424 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-24 00:01:34.425587 | orchestrator | 00:01:34.425 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-07-24 00:01:34.428447 | orchestrator | 00:01:34.428 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-07-24 00:01:34.481510 | orchestrator | 00:01:34.481 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-07-24 00:01:34.487383 | orchestrator | 00:01:34.487 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-07-24 00:01:35.167677 | orchestrator | 00:01:35.167 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=dad947b4-b4af-4d0a-b341-34f985d09ea4]
2025-07-24 00:01:35.179629 | orchestrator | 00:01:35.179 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-24 00:01:37.573264 | orchestrator | 00:01:37.572 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=7ffc2fa7-c4bb-4ada-b602-3d94f3eb78b5]
2025-07-24 00:01:37.585739 | orchestrator | 00:01:37.585 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-24 00:01:37.592225 | orchestrator | 00:01:37.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=145aed19-8066-481a-a1a1-6b3a2d7c9bc2]
2025-07-24 00:01:37.606219 | orchestrator | 00:01:37.606 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-07-24 00:01:37.607536 | orchestrator | 00:01:37.607 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=dff5acf3-a2ae-41ad-a63b-d73f25443d03]
2025-07-24 00:01:37.612025 | orchestrator | 00:01:37.611 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=f247127dd9cbbf2174c11da0acaa108a8ada2eea]
2025-07-24 00:01:37.613883 | orchestrator | 00:01:37.613 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-24 00:01:37.619686 | orchestrator | 00:01:37.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-24 00:01:37.622942 | orchestrator | 00:01:37.622 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=759013c1-9988-48ca-b621-cb5a40ddd526]
2025-07-24 00:01:37.629029 | orchestrator | 00:01:37.628 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-24 00:01:37.635594 | orchestrator | 00:01:37.635 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=74472b5d-757b-4c6d-83c8-ac835243b859]
2025-07-24 00:01:37.644128 | orchestrator | 00:01:37.643 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-07-24 00:01:37.693926 | orchestrator | 00:01:37.693 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=81e97df9-f368-48fc-afd5-b0bf6553a5e5]
2025-07-24 00:01:37.706496 | orchestrator | 00:01:37.706 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-24 00:01:37.734782 | orchestrator | 00:01:37.734 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=c47a262b-271d-4bf2-9179-861ceff6be10]
2025-07-24 00:01:37.749013 | orchestrator | 00:01:37.748 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-24 00:01:37.749227 | orchestrator | 00:01:37.748 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=a25e96fd-810a-4d05-89c1-121808008436]
2025-07-24 00:01:37.754410 | orchestrator | 00:01:37.754 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=1221c27d647e4fb6f7b2e0f2616bc13578fcf2f2]
2025-07-24 00:01:37.758201 | orchestrator | 00:01:37.757 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-24 00:01:37.826466 | orchestrator | 00:01:37.826 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=78500554-86bc-40e8-b814-41d6c776f857]
2025-07-24 00:01:38.561250 | orchestrator | 00:01:38.560 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=227d588e-2d7b-42de-a955-3a84108d0e91]
2025-07-24 00:01:39.101785 | orchestrator | 00:01:39.101 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=479d5e57-a850-4bfc-a531-ca040a6a029d]
2025-07-24 00:01:39.108593 | orchestrator | 00:01:39.108 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-24 00:01:40.992761 | orchestrator | 00:01:40.992 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=e2dc3d97-1e00-40cb-be91-7a460867e0d9]
2025-07-24 00:01:41.015199 | orchestrator | 00:01:41.014 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=5ea9738b-fe4c-4e25-9e5d-f4febda287f8]
2025-07-24 00:01:41.040702 | orchestrator | 00:01:41.040 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=069b8e5c-62a5-4318-9e22-43fac1d8a409]
2025-07-24 00:01:41.041290 | orchestrator | 00:01:41.041 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=5ffe1cff-80af-40c8-ba73-30b3a5f9ef6d]
2025-07-24 00:01:41.046959 | orchestrator | 00:01:41.046 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=2ef5fce0-0c4b-4f03-8afe-2577896fe49a]
2025-07-24 00:01:41.085794 | orchestrator | 00:01:41.085 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=2ecb840a-2cb3-4fce-aab1-19b4b3016ad5]
2025-07-24 00:01:41.623546 | orchestrator | 00:01:41.623 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=6d7c4cfa-c197-4785-884d-f2d33f7ddb5b]
2025-07-24 00:01:41.636674 | orchestrator | 00:01:41.636 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-24 00:01:41.637224 | orchestrator | 00:01:41.637 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-24 00:01:41.641289 | orchestrator | 00:01:41.641 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-07-24 00:01:41.812432 | orchestrator | 00:01:41.812 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=db6fa122-fc8b-4c85-a8f4-4d23a60129a7]
2025-07-24 00:01:41.825577 | orchestrator | 00:01:41.825 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-07-24 00:01:41.825696 | orchestrator | 00:01:41.825 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-07-24 00:01:41.825956 | orchestrator | 00:01:41.825 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-07-24 00:01:41.830566 | orchestrator | 00:01:41.830 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-07-24 00:01:41.830631 | orchestrator | 00:01:41.830 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-07-24 00:01:41.841288 | orchestrator | 00:01:41.841 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-07-24 00:01:41.844278 | orchestrator | 00:01:41.844 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-07-24 00:01:41.845087 | orchestrator | 00:01:41.845 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-07-24 00:01:41.887053 | orchestrator | 00:01:41.886 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=82e4cc69-5d2f-404c-a162-c7f09a8ee3cf]
2025-07-24 00:01:41.897525 | orchestrator | 00:01:41.897 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-07-24 00:01:42.040551 | orchestrator | 00:01:42.040 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=b4a6f469-618a-4dbc-ae7f-5e3beb13aaff]
2025-07-24 00:01:42.056049 | orchestrator | 00:01:42.055 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-07-24 00:01:42.228155 | orchestrator | 00:01:42.227 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=8b2ca6c9-2730-43e3-b077-85d9ed3be5f3]
2025-07-24 00:01:42.232998 | orchestrator | 00:01:42.232 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-07-24 00:01:42.415626 | orchestrator | 00:01:42.415 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=0db1873f-bb47-431f-96c7-7d334ef1c793]
2025-07-24 00:01:42.429284 | orchestrator | 00:01:42.428 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-07-24 00:01:42.649091 | orchestrator | 00:01:42.648 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=24ab6747-1f4b-4b6b-8e1b-6b3d00d37915]
2025-07-24 00:01:42.654491 | orchestrator | 00:01:42.654 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-07-24 00:01:42.701485 | orchestrator | 00:01:42.701 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=2b3f80dc-8fe8-4b34-8d9c-a3852ca4c9b0]
2025-07-24 00:01:42.707797 | orchestrator | 00:01:42.707 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-07-24 00:01:42.755665 | orchestrator | 00:01:42.755 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=ac3e7646-76be-4fcd-a4a6-64b082d7fdcd]
2025-07-24 00:01:42.760381 | orchestrator | 00:01:42.760 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-07-24 00:01:42.816524 | orchestrator | 00:01:42.816 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=8e49ca0c-a2da-434a-bc5c-e897bd78384c]
2025-07-24 00:01:42.824154 | orchestrator | 00:01:42.823 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-07-24 00:01:42.850288 | orchestrator | 00:01:42.848 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=364f19c5-bb95-41cb-b25d-70144843f499]
2025-07-24 00:01:42.857637 | orchestrator | 00:01:42.857 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=7bc5bc69-d732-4aa8-973c-67216821634f]
2025-07-24 00:01:42.981570 | orchestrator | 00:01:42.981 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=26593932-4905-4b8a-9e8f-2aded4efe44a]
2025-07-24 00:01:43.095298 | orchestrator | 00:01:43.094 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=52e484b2-8da2-43d8-af2b-75e642e5a7b1]
2025-07-24 00:01:43.259613 | orchestrator | 00:01:43.259 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=19c63ea8-7487-49ea-aff0-4f131481d660]
2025-07-24 00:01:43.446309 | orchestrator | 00:01:43.445 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=06ee9ac9-e866-4f1c-b006-51a08c0944df]
2025-07-24 00:01:43.543379 | orchestrator | 00:01:43.543 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=1fbc945a-c5ee-4220-b11e-2883e206e507]
2025-07-24 00:01:43.652749 | orchestrator | 00:01:43.652 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=f3c645c9-1b5d-4201-a6f5-983f3270a74d]
2025-07-24 00:01:43.776735 | orchestrator | 00:01:43.776 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=c9f16333-3e51-400f-8b8b-c27d3560f4c1]
2025-07-24 00:01:44.529093 | orchestrator | 00:01:44.528 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=bc80d3f2-a7d7-4182-860e-2b05d60c240e]
2025-07-24 00:01:44.562336 | orchestrator | 00:01:44.562 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-07-24 00:01:44.571711 | orchestrator | 00:01:44.571 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-07-24 00:01:44.573050 | orchestrator | 00:01:44.572 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-07-24 00:01:44.577352 | orchestrator | 00:01:44.577 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-07-24 00:01:44.588645 | orchestrator | 00:01:44.588 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-07-24 00:01:44.589517 | orchestrator | 00:01:44.589 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-07-24 00:01:44.602729 | orchestrator | 00:01:44.602 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-07-24 00:01:47.264497 | orchestrator | 00:01:47.264 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=f7c270a8-3d7f-4fa0-ae41-61bfb1601aa0]
2025-07-24 00:01:47.272111 | orchestrator | 00:01:47.271 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-07-24 00:01:47.276488 | orchestrator | 00:01:47.276 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-07-24 00:01:47.281410 | orchestrator | 00:01:47.281 STDOUT terraform: local_file.inventory: Creating...
2025-07-24 00:01:47.282993 | orchestrator | 00:01:47.282 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=73a46f42278359ae84c8f382b3ae94b833c2e45a]
2025-07-24 00:01:47.285318 | orchestrator | 00:01:47.285 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=de19ade0b80a5af7609fcb2061fe520d8b719bf9]
2025-07-24 00:01:48.032057 | orchestrator | 00:01:48.031 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=f7c270a8-3d7f-4fa0-ae41-61bfb1601aa0]
2025-07-24 00:01:54.573394 | orchestrator | 00:01:54.573 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-07-24 00:01:54.578442 | orchestrator | 00:01:54.578 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-07-24 00:01:54.578530 | orchestrator | 00:01:54.578 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-07-24 00:01:54.591049 | orchestrator | 00:01:54.590 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-07-24 00:01:54.591529 | orchestrator | 00:01:54.591 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-07-24 00:01:54.604599 | orchestrator | 00:01:54.604 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-07-24 00:02:04.575805 | orchestrator | 00:02:04.575 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-07-24 00:02:04.579056 | orchestrator | 00:02:04.578 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-07-24 00:02:04.579133 | orchestrator | 00:02:04.578 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-07-24 00:02:04.591345 | orchestrator | 00:02:04.591 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-07-24 00:02:04.592432 | orchestrator | 00:02:04.592 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-07-24 00:02:04.604736 | orchestrator | 00:02:04.604 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-07-24 00:02:05.068513 | orchestrator | 00:02:05.068 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=36ab50dd-cc2c-4c42-a099-35de6f9881a3]
2025-07-24 00:02:05.286346 | orchestrator | 00:02:05.285 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=9112b798-0d2c-403c-98f6-c10e116c2c7d]
2025-07-24 00:02:05.775777 | orchestrator | 00:02:05.775 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=887c0544-dcbd-4132-9115-2fc84fcd6692]
2025-07-24 00:02:14.594236 | orchestrator | 00:02:14.593 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-07-24 00:02:14.594376 | orchestrator | 00:02:14.594 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-07-24 00:02:14.605435 | orchestrator | 00:02:14.605 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-07-24 00:02:15.542707 | orchestrator | 00:02:15.542 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=eaecdd10-661b-4895-93c2-5221d2a81dad]
2025-07-24 00:02:15.651631 | orchestrator | 00:02:15.651 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=b33a7c10-d21a-46b9-b074-8a3101577a0d]
2025-07-24 00:02:15.905579 | orchestrator | 00:02:15.904 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=2a7d5be3-9a6a-40f9-90a9-a3a2f7db930e]
2025-07-24 00:02:15.936821 | orchestrator | 00:02:15.936 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-07-24 00:02:15.945785 | orchestrator | 00:02:15.945 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-07-24 00:02:15.945856 | orchestrator | 00:02:15.945 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-07-24 00:02:15.946127 | orchestrator | 00:02:15.946 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-07-24 00:02:15.946385 | orchestrator | 00:02:15.946 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-07-24 00:02:15.946561 | orchestrator | 00:02:15.946 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-07-24 00:02:15.946735 | orchestrator | 00:02:15.946 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-07-24 00:02:15.947133 | orchestrator | 00:02:15.947 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-07-24 00:02:15.947300 | orchestrator | 00:02:15.947 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=5031147475335916965]
2025-07-24 00:02:15.958320 | orchestrator | 00:02:15.958 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-07-24 00:02:15.959478 | orchestrator | 00:02:15.959 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-07-24 00:02:15.970319 | orchestrator | 00:02:15.970 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-07-24 00:02:19.386297 | orchestrator | 00:02:19.385 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=9112b798-0d2c-403c-98f6-c10e116c2c7d/78500554-86bc-40e8-b814-41d6c776f857]
2025-07-24 00:02:19.386792 | orchestrator | 00:02:19.386 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=eaecdd10-661b-4895-93c2-5221d2a81dad/a25e96fd-810a-4d05-89c1-121808008436]
2025-07-24 00:02:19.412550 | orchestrator | 00:02:19.412 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=36ab50dd-cc2c-4c42-a099-35de6f9881a3/74472b5d-757b-4c6d-83c8-ac835243b859]
2025-07-24 00:02:19.429137 | orchestrator | 00:02:19.428 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=9112b798-0d2c-403c-98f6-c10e116c2c7d/145aed19-8066-481a-a1a1-6b3a2d7c9bc2]
2025-07-24 00:02:19.439962 | orchestrator | 00:02:19.439 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=eaecdd10-661b-4895-93c2-5221d2a81dad/7ffc2fa7-c4bb-4ada-b602-3d94f3eb78b5]
2025-07-24 00:02:19.464132 | orchestrator | 00:02:19.463 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=36ab50dd-cc2c-4c42-a099-35de6f9881a3/759013c1-9988-48ca-b621-cb5a40ddd526]
2025-07-24 00:02:25.548942 | orchestrator | 00:02:25.548 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=eaecdd10-661b-4895-93c2-5221d2a81dad/dff5acf3-a2ae-41ad-a63b-d73f25443d03]
2025-07-24 00:02:25.559855 | orchestrator | 00:02:25.559 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=9112b798-0d2c-403c-98f6-c10e116c2c7d/c47a262b-271d-4bf2-9179-861ceff6be10]
2025-07-24 00:02:25.667362 | orchestrator | 00:02:25.666 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=36ab50dd-cc2c-4c42-a099-35de6f9881a3/81e97df9-f368-48fc-afd5-b0bf6553a5e5]
2025-07-24 00:02:25.971251 | orchestrator | 00:02:25.970 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-07-24 00:02:35.975397 | orchestrator | 00:02:35.975 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-07-24 00:02:36.407583 | orchestrator | 00:02:36.406 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=7d00aeb0-122d-446a-9b1d-7bfb7bd7ba22]
2025-07-24 00:02:36.422124 | orchestrator | 00:02:36.421 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
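The `Apply complete! Resources: 64 added, 0 changed, 0 destroyed.` summary above is the line automation usually keys off to confirm a `terraform apply` succeeded. A minimal shell sketch of extracting the "added" count from such output (the helper name `tf_added_count` is illustrative, not part of the testbed scripts):

```shell
#!/usr/bin/env bash
# Read terraform output on stdin and print the number of resources added.
# Exits non-zero if no "Apply complete!" summary line is found.
tf_added_count() {
  grep -oE 'Apply complete! Resources: [0-9]+ added' | grep -oE '[0-9]+'
}

# Example, using the summary line seen in this job:
echo 'Apply complete! Resources: 64 added, 0 changed, 0 destroyed.' | tf_added_count
# prints: 64
```

In practice `terraform apply`'s exit code is the authoritative success signal; parsing the summary is only useful for reporting counts.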
2025-07-24 00:02:36.422237 | orchestrator | 00:02:36.422 STDOUT terraform: Outputs:
2025-07-24 00:02:36.422248 | orchestrator | 00:02:36.422 STDOUT terraform: manager_address =
2025-07-24 00:02:36.422262 | orchestrator | 00:02:36.422 STDOUT terraform: private_key =
2025-07-24 00:02:36.566242 | orchestrator | ok: Runtime: 0:01:09.988589
2025-07-24 00:02:36.597140 |
2025-07-24 00:02:36.597253 | TASK [Create infrastructure (stable)]
2025-07-24 00:02:37.136477 | orchestrator | skipping: Conditional result was False
2025-07-24 00:02:37.151826 |
2025-07-24 00:02:37.151952 | TASK [Fetch manager address]
2025-07-24 00:02:37.615216 | orchestrator | ok
2025-07-24 00:02:37.624378 |
2025-07-24 00:02:37.624480 | TASK [Set manager_host address]
2025-07-24 00:02:37.704557 | orchestrator | ok
2025-07-24 00:02:37.713754 |
2025-07-24 00:02:37.713858 | LOOP [Update ansible collections]
2025-07-24 00:02:39.036423 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-24 00:02:39.036706 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-24 00:02:39.036744 | orchestrator | Starting galaxy collection install process
2025-07-24 00:02:39.036769 | orchestrator | Process install dependency map
2025-07-24 00:02:39.036791 | orchestrator | Starting collection install process
2025-07-24 00:02:39.036811 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2025-07-24 00:02:39.036847 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2025-07-24 00:02:39.036873 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-07-24 00:02:39.036925 | orchestrator | ok: Item: commons Runtime: 0:00:00.995245
2025-07-24 00:02:40.005885 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-24 00:02:40.006178 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-24 00:02:40.006240 | orchestrator | Starting galaxy collection install process
2025-07-24 00:02:40.006283 | orchestrator | Process install dependency map
2025-07-24 00:02:40.006320 | orchestrator | Starting collection install process
2025-07-24 00:02:40.006356 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2025-07-24 00:02:40.006393 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2025-07-24 00:02:40.006428 | orchestrator | osism.services:999.0.0 was installed successfully
2025-07-24 00:02:40.006481 | orchestrator | ok: Item: services Runtime: 0:00:00.699051
2025-07-24 00:02:40.029844 |
2025-07-24 00:02:40.030014 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-24 00:02:50.594729 | orchestrator | ok
2025-07-24 00:02:50.607741 |
2025-07-24 00:02:50.607877 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-24 00:03:50.658779 | orchestrator | ok
2025-07-24 00:03:50.670716 |
2025-07-24 00:03:50.670904 | TASK [Fetch manager ssh hostkey]
2025-07-24 00:03:52.248775 | orchestrator | Output suppressed because no_log was given
2025-07-24 00:03:52.263006 |
2025-07-24 00:03:52.263169 | TASK [Get ssh keypair from terraform environment]
2025-07-24 00:03:52.798688 | orchestrator | ok: Runtime: 0:00:00.008467
2025-07-24 00:03:52.814709 |
2025-07-24 00:03:52.814918 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-24 00:03:52.862793 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
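The `wait_for`-style task above polls port 22 until the SSH banner contains "OpenSSH". A rough shell equivalent, for illustration only (`wait_for_ssh` is a hypothetical helper, not something the playbooks define; it assumes bash with `/dev/tcp` support and GNU `timeout`):

```shell
#!/usr/bin/env bash
# Poll a TCP port until it answers with a banner containing "OpenSSH",
# or give up after $timeout seconds. Illustrative only.
wait_for_ssh() {
  local host="$1" port="$2" timeout="${3:-300}"
  local deadline=$((SECONDS + timeout))
  while [ "$SECONDS" -lt "$deadline" ]; do
    # Open a TCP connection and read the first bytes the server sends.
    if banner="$(timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}; head -c 64 <&3" 2>/dev/null)" \
       && [[ "$banner" == *OpenSSH* ]]; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

Matching on the banner (rather than the port merely accepting connections) avoids racing sshd while it is still generating host keys during first boot.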
2025-07-24 00:03:52.872259 |
2025-07-24 00:03:52.872381 | TASK [Run manager part 0]
2025-07-24 00:03:53.882668 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-24 00:03:53.937517 | orchestrator |
2025-07-24 00:03:53.937567 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-07-24 00:03:53.937575 | orchestrator |
2025-07-24 00:03:53.937588 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-07-24 00:03:55.742201 | orchestrator | ok: [testbed-manager]
2025-07-24 00:03:55.742296 | orchestrator |
2025-07-24 00:03:55.742349 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-24 00:03:55.742373 | orchestrator |
2025-07-24 00:03:55.742395 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-24 00:03:57.602527 | orchestrator | ok: [testbed-manager]
2025-07-24 00:03:57.602626 | orchestrator |
2025-07-24 00:03:57.602644 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-24 00:03:58.282252 | orchestrator | ok: [testbed-manager]
2025-07-24 00:03:58.282304 | orchestrator |
2025-07-24 00:03:58.282316 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-24 00:03:58.332951 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:03:58.333007 | orchestrator |
2025-07-24 00:03:58.333016 | orchestrator | TASK [Update package cache] ****************************************************
2025-07-24 00:03:58.358338 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:03:58.358391 | orchestrator |
2025-07-24 00:03:58.358398 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-24 00:03:58.381961 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:03:58.382029 | orchestrator |
2025-07-24 00:03:58.382036 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-24 00:03:58.414902 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:03:58.414957 | orchestrator |
2025-07-24 00:03:58.414963 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-24 00:03:58.444937 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:03:58.444991 | orchestrator |
2025-07-24 00:03:58.444998 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-07-24 00:03:58.479499 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:03:58.479556 | orchestrator |
2025-07-24 00:03:58.479564 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-07-24 00:03:58.509605 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:03:58.509658 | orchestrator |
2025-07-24 00:03:58.509665 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-07-24 00:03:59.287783 | orchestrator | changed: [testbed-manager]
2025-07-24 00:03:59.287890 | orchestrator |
2025-07-24 00:03:59.287919 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-07-24 00:06:19.747720 | orchestrator | changed: [testbed-manager]
2025-07-24 00:06:19.747946 | orchestrator |
2025-07-24 00:06:19.747969 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-24 00:07:34.556951 | orchestrator | changed: [testbed-manager]
2025-07-24 00:07:34.557056 | orchestrator |
2025-07-24 00:07:34.557073 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-24 00:07:57.129105 | orchestrator | changed: [testbed-manager]
2025-07-24 00:07:57.129167 | orchestrator |
2025-07-24 00:07:57.129178 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-24 00:08:07.809099 | orchestrator | changed: [testbed-manager]
2025-07-24 00:08:07.809236 | orchestrator |
2025-07-24 00:08:07.809256 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-24 00:08:07.859768 | orchestrator | ok: [testbed-manager]
2025-07-24 00:08:07.859847 | orchestrator |
2025-07-24 00:08:07.859858 | orchestrator | TASK [Get current user] ********************************************************
2025-07-24 00:08:08.671507 | orchestrator | ok: [testbed-manager]
2025-07-24 00:08:08.671555 | orchestrator |
2025-07-24 00:08:08.671565 | orchestrator | TASK [Create venv directory] ***************************************************
2025-07-24 00:08:09.454243 | orchestrator | changed: [testbed-manager]
2025-07-24 00:08:09.454282 | orchestrator |
2025-07-24 00:08:09.454290 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-07-24 00:08:17.870677 | orchestrator | changed: [testbed-manager]
2025-07-24 00:08:17.870757 | orchestrator |
2025-07-24 00:08:17.870801 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-07-24 00:08:25.143996 | orchestrator | changed: [testbed-manager]
2025-07-24 00:08:25.144062 | orchestrator |
2025-07-24 00:08:25.144079 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-07-24 00:08:27.863813 | orchestrator | changed: [testbed-manager]
2025-07-24 00:08:27.863869 | orchestrator |
2025-07-24 00:08:27.863880 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-07-24 00:08:29.676062 | orchestrator | changed: [testbed-manager]
2025-07-24 00:08:29.676104 | orchestrator |
2025-07-24 00:08:29.676112 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-07-24 00:08:30.832324 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-24 00:08:30.832427 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-24 00:08:30.832443 | orchestrator |
2025-07-24 00:08:30.832456 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-07-24 00:08:30.878790 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-24 00:08:30.878867 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-24 00:08:30.878881 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-24 00:08:30.878894 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-07-24 00:08:37.842130 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-24 00:08:37.842205 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-24 00:08:37.842222 | orchestrator |
2025-07-24 00:08:37.842236 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-07-24 00:08:38.443327 | orchestrator | changed: [testbed-manager]
2025-07-24 00:08:38.443369 | orchestrator |
2025-07-24 00:08:38.443378 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-07-24 00:08:58.866212 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-07-24 00:08:58.866265 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-07-24 00:08:58.866275 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-07-24 00:08:58.866282 | orchestrator |
2025-07-24 00:08:58.866290 | orchestrator | TASK [Install local collections] ***********************************************
2025-07-24 00:09:01.252258 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-07-24 00:09:01.252349 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-07-24 00:09:01.252372 | orchestrator |
2025-07-24 00:09:01.252392 | orchestrator | PLAY [Create operator user] ****************************************************
2025-07-24 00:09:01.252407 | orchestrator |
2025-07-24 00:09:01.252419 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-24 00:09:02.672253 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:02.672360 | orchestrator |
2025-07-24 00:09:02.672379 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-24 00:09:02.725875 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:02.725969 | orchestrator |
2025-07-24 00:09:02.725984 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-24 00:09:02.803422 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:02.803464 | orchestrator |
2025-07-24 00:09:02.803472 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-24 00:09:03.605556 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:03.605661 | orchestrator |
2025-07-24 00:09:03.605678 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-24 00:09:04.333395 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:04.333494 | orchestrator |
2025-07-24 00:09:04.333510 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-24 00:09:06.817517 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-07-24 00:09:06.817570 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-07-24 00:09:06.817579 | orchestrator |
2025-07-24 00:09:06.817594 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-24 00:09:08.832745 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:08.832866 | orchestrator |
2025-07-24 00:09:08.832881 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-24 00:09:10.637261 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-07-24 00:09:10.637353 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-07-24 00:09:10.637369 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-07-24 00:09:10.637382 | orchestrator |
2025-07-24 00:09:10.637395 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-24 00:09:10.696547 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:10.696692 | orchestrator |
2025-07-24 00:09:10.696721 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-24 00:09:11.282730 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:11.282813 | orchestrator |
2025-07-24 00:09:11.282832 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-24 00:09:11.355503 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:11.355593 | orchestrator |
2025-07-24 00:09:11.355609 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-24 00:09:12.214557 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-24 00:09:12.214671 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:12.214689 | orchestrator |
2025-07-24 00:09:12.214702 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-24 00:09:12.253399 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:12.253497 | orchestrator |
2025-07-24 00:09:12.253519 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-24 00:09:12.285965 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:12.286075 | orchestrator |
2025-07-24 00:09:12.286090 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-24 00:09:12.318437 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:12.318495 | orchestrator |
2025-07-24 00:09:12.318504 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-24 00:09:12.369673 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:12.369744 | orchestrator |
2025-07-24 00:09:12.369757 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-24 00:09:13.134960 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:13.135047 | orchestrator |
2025-07-24 00:09:13.135062 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-24 00:09:13.135075 | orchestrator |
2025-07-24 00:09:13.135086 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-24 00:09:14.547375 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:14.547470 | orchestrator |
2025-07-24 00:09:14.547486 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-07-24 00:09:15.521609 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:15.521745 | orchestrator |
2025-07-24 00:09:15.521764 | orchestrator | PLAY RECAP *********************************************************************
2025-07-24 00:09:15.521777 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-07-24 00:09:15.521789 | orchestrator |
2025-07-24 00:09:16.105788 | orchestrator | ok: Runtime: 0:05:22.464338
2025-07-24 00:09:16.124816 |
2025-07-24 00:09:16.125034 | TASK [Point out that the log in on the manager is now possible]
2025-07-24 00:09:16.167340 | orchestrator | ok: It is now possible to log in to the manager with 'make login'.
2025-07-24 00:09:16.177744 |
2025-07-24 00:09:16.177860 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-24 00:09:16.211006 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-24 00:09:16.219181 |
2025-07-24 00:09:16.219295 | TASK [Run manager part 1 + 2]
2025-07-24 00:09:17.183008 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-24 00:09:17.237946 | orchestrator |
2025-07-24 00:09:17.238060 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-07-24 00:09:17.238081 | orchestrator |
2025-07-24 00:09:17.238110 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-24 00:09:19.824703 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:19.824808 | orchestrator |
2025-07-24 00:09:19.824868 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-24 00:09:19.871084 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:19.871168 | orchestrator |
2025-07-24 00:09:19.871188 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-24 00:09:19.910722 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:19.910793 | orchestrator |
2025-07-24 00:09:19.910809 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-24 00:09:19.951376 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:19.951457 | orchestrator |
2025-07-24 00:09:19.951476 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-24 00:09:20.027749 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:20.027832 | orchestrator |
2025-07-24 00:09:20.027851 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-24 00:09:20.089482 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:20.089557 | orchestrator |
2025-07-24 00:09:20.089575 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-24 00:09:20.138257 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-07-24 00:09:20.138341 | orchestrator |
2025-07-24 00:09:20.138357 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-24 00:09:20.917939 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:20.918176 | orchestrator |
2025-07-24 00:09:20.918204 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-24 00:09:20.964856 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:20.964934 | orchestrator |
2025-07-24 00:09:20.964950 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-24 00:09:22.358008 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:22.358085 | orchestrator |
2025-07-24 00:09:22.358094 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-24 00:09:22.950084 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:22.950138 | orchestrator |
2025-07-24 00:09:22.950146 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-24 00:09:24.265161 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:24.265223 | orchestrator |
2025-07-24 00:09:24.265238 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-24 00:09:40.052152 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:40.052282 | orchestrator |
2025-07-24 00:09:40.052302 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-24 00:09:40.758573 | orchestrator | ok: [testbed-manager]
2025-07-24 00:09:40.758684 | orchestrator |
2025-07-24 00:09:40.758705 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-24 00:09:40.812857 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:40.812939 | orchestrator |
2025-07-24 00:09:40.812954 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-07-24 00:09:41.768254 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:41.768319 | orchestrator |
2025-07-24 00:09:41.768335 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-07-24 00:09:42.756394 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:42.757272 | orchestrator |
2025-07-24 00:09:42.757298 | orchestrator | TASK [Create configuration directory] ******************************************
2025-07-24 00:09:43.349505 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:43.349604 | orchestrator |
2025-07-24 00:09:43.349621 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-07-24 00:09:43.390282 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-24 00:09:43.390418 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-24 00:09:43.390443 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-24 00:09:43.390456 | orchestrator | deprecation_warnings=False in ansible.cfg.
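The two key-copy tasks above place the Terraform keypair on the manager; the important detail is that the private key must end up with owner-only permissions or OpenSSH will refuse to use it. A minimal sketch under that assumption (`install_private_key` is an illustrative helper, not part of the testbed; it relies on GNU `install -D`):

```shell
#!/usr/bin/env bash
set -e
# Copy a private key into place with mode 0600, creating parent
# directories (e.g. ~/.ssh) as needed. Illustrative helper only.
install_private_key() {
  local src="$1" dest="$2"
  install -D -m 0600 "$src" "$dest"
}
```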
2025-07-24 00:09:48.227394 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:48.227472 | orchestrator |
2025-07-24 00:09:48.227489 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-07-24 00:09:57.591316 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-07-24 00:09:57.591443 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-07-24 00:09:57.591476 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-07-24 00:09:57.591497 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-07-24 00:09:57.591530 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-07-24 00:09:57.591552 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-07-24 00:09:57.591570 | orchestrator |
2025-07-24 00:09:57.591583 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-07-24 00:09:58.664297 | orchestrator | changed: [testbed-manager]
2025-07-24 00:09:58.664332 | orchestrator |
2025-07-24 00:09:58.664338 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-07-24 00:09:58.703494 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:09:58.703534 | orchestrator |
2025-07-24 00:09:58.703542 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-07-24 00:10:01.873059 | orchestrator | changed: [testbed-manager]
2025-07-24 00:10:01.873153 | orchestrator |
2025-07-24 00:10:01.873172 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-07-24 00:10:01.917894 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:10:01.917979 | orchestrator |
2025-07-24 00:10:01.917995 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-07-24 00:11:45.336739 | orchestrator | changed: [testbed-manager]
2025-07-24 00:11:45.336862 | orchestrator |
2025-07-24 00:11:45.336885 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-24 00:11:46.507976 | orchestrator | ok: [testbed-manager]
2025-07-24 00:11:46.508013 | orchestrator |
2025-07-24 00:11:46.508020 | orchestrator | PLAY RECAP *********************************************************************
2025-07-24 00:11:46.508028 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-07-24 00:11:46.508033 | orchestrator |
2025-07-24 00:11:46.857847 | orchestrator | ok: Runtime: 0:02:30.039935
2025-07-24 00:11:46.873933 |
2025-07-24 00:11:46.874131 | TASK [Reboot manager]
2025-07-24 00:11:48.410598 | orchestrator | ok: Runtime: 0:00:00.987584
2025-07-24 00:11:48.418945 |
2025-07-24 00:11:48.419151 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-24 00:12:04.841910 | orchestrator | ok
2025-07-24 00:12:04.849707 |
2025-07-24 00:12:04.849817 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-24 00:13:04.897008 | orchestrator | ok
2025-07-24 00:13:04.907409 |
2025-07-24 00:13:04.907535 | TASK [Deploy manager + bootstrap nodes]
2025-07-24 00:13:07.421721 | orchestrator |
2025-07-24 00:13:07.421906 | orchestrator | # DEPLOY MANAGER
2025-07-24 00:13:07.421928 | orchestrator |
2025-07-24 00:13:07.421943 | orchestrator | + set -e
2025-07-24 00:13:07.421956 | orchestrator | + echo
2025-07-24 00:13:07.421971 | orchestrator | + echo '# DEPLOY MANAGER'
2025-07-24 00:13:07.421988 | orchestrator | + echo
2025-07-24 00:13:07.422089 | orchestrator | + cat /opt/manager-vars.sh
2025-07-24 00:13:07.424813 | orchestrator | export NUMBER_OF_NODES=6
2025-07-24 00:13:07.424849 | orchestrator |
2025-07-24 00:13:07.424862 | orchestrator | export CEPH_VERSION=reef
2025-07-24 00:13:07.424876 | orchestrator | export CONFIGURATION_VERSION=main
2025-07-24 00:13:07.424889 | orchestrator | export MANAGER_VERSION=latest
2025-07-24 00:13:07.424943 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-07-24 00:13:07.424955 | orchestrator |
2025-07-24 00:13:07.424974 | orchestrator | export ARA=false
2025-07-24 00:13:07.424986 | orchestrator | export DEPLOY_MODE=manager
2025-07-24 00:13:07.425003 | orchestrator | export TEMPEST=true
2025-07-24 00:13:07.425015 | orchestrator | export IS_ZUUL=true
2025-07-24 00:13:07.425026 | orchestrator |
2025-07-24 00:13:07.425045 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173
2025-07-24 00:13:07.425056 | orchestrator | export EXTERNAL_API=false
2025-07-24 00:13:07.425067 | orchestrator |
2025-07-24 00:13:07.425078 | orchestrator | export IMAGE_USER=ubuntu
2025-07-24 00:13:07.425092 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-07-24 00:13:07.425104 | orchestrator |
2025-07-24 00:13:07.425115 | orchestrator | export CEPH_STACK=ceph-ansible
2025-07-24 00:13:07.425137 | orchestrator |
2025-07-24 00:13:07.425158 | orchestrator | + echo
2025-07-24 00:13:07.425179 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-24 00:13:07.426055 | orchestrator | ++ export INTERACTIVE=false
2025-07-24 00:13:07.426081 | orchestrator | ++ INTERACTIVE=false
2025-07-24 00:13:07.426092 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-24 00:13:07.426104 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-24 00:13:07.426262 | orchestrator | + source /opt/manager-vars.sh
2025-07-24 00:13:07.426309 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-24 00:13:07.426331 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-24 00:13:07.426342 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-24 00:13:07.426353 | orchestrator | ++ CEPH_VERSION=reef
2025-07-24 00:13:07.426363 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-24 00:13:07.426397 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-24 00:13:07.426409 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-24 00:13:07.426420 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-24 00:13:07.426460 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-24 00:13:07.426482 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-24 00:13:07.426493 | orchestrator | ++ export ARA=false
2025-07-24 00:13:07.426528 | orchestrator | ++ ARA=false
2025-07-24 00:13:07.426539 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-24 00:13:07.426550 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-24 00:13:07.426560 | orchestrator | ++ export TEMPEST=true
2025-07-24 00:13:07.426571 | orchestrator | ++ TEMPEST=true
2025-07-24 00:13:07.426582 | orchestrator | ++ export IS_ZUUL=true
2025-07-24 00:13:07.426593 | orchestrator | ++ IS_ZUUL=true
2025-07-24 00:13:07.426603 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173
2025-07-24 00:13:07.426615 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173
2025-07-24 00:13:07.426625 | orchestrator | ++ export EXTERNAL_API=false
2025-07-24 00:13:07.426636 | orchestrator | ++ EXTERNAL_API=false
2025-07-24 00:13:07.426647 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-24 00:13:07.426657 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-24 00:13:07.426672 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-24 00:13:07.426683 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-24 00:13:07.426694 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-24 00:13:07.426705 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-24 00:13:07.426717 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-24 00:13:07.475448 | orchestrator | + docker version
2025-07-24 00:13:07.743466 | orchestrator | Client: Docker Engine - Community
2025-07-24 00:13:07.743599 | orchestrator | Version: 27.5.1
2025-07-24 00:13:07.743626 | orchestrator | API version: 1.47
2025-07-24 00:13:07.743649 | orchestrator | Go version: go1.22.11
2025-07-24 00:13:07.743669 | orchestrator | Git commit: 9f9e405
2025-07-24 00:13:07.743681 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-24 00:13:07.743694 | orchestrator | OS/Arch: linux/amd64
2025-07-24 00:13:07.743705 | orchestrator | Context: default
2025-07-24 00:13:07.743716 | orchestrator |
2025-07-24 00:13:07.743727 | orchestrator | Server: Docker Engine - Community
2025-07-24 00:13:07.743739 | orchestrator | Engine:
2025-07-24 00:13:07.743749 | orchestrator | Version: 27.5.1
2025-07-24 00:13:07.743761 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-24 00:13:07.743802 | orchestrator | Go version: go1.22.11
2025-07-24 00:13:07.743816 | orchestrator | Git commit: 4c9b3b0
2025-07-24 00:13:07.743834 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-24 00:13:07.743864 | orchestrator | OS/Arch: linux/amd64
2025-07-24 00:13:07.743882 | orchestrator | Experimental: false
2025-07-24 00:13:07.743900 | orchestrator | containerd:
2025-07-24 00:13:07.743917 | orchestrator | Version: 1.7.27
2025-07-24 00:13:07.743936 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-24 00:13:07.743954 | orchestrator | runc:
2025-07-24 00:13:07.743974 | orchestrator | Version: 1.2.5
2025-07-24 00:13:07.743992 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-24 00:13:07.744009 | orchestrator | docker-init:
2025-07-24 00:13:07.744028 | orchestrator | Version: 0.19.0
2025-07-24 00:13:07.744040 | orchestrator | GitCommit: de40ad0
2025-07-24 00:13:07.747427 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-24 00:13:07.756808 | orchestrator | + set -e
2025-07-24 00:13:07.758098 | orchestrator | + source /opt/manager-vars.sh
2025-07-24 00:13:07.758201 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-24 00:13:07.758219 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-24 00:13:07.758256 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-24 00:13:07.758268 | orchestrator | ++ CEPH_VERSION=reef
2025-07-24 00:13:07.758279 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-24 00:13:07.758292 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-24 00:13:07.758303 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-24 00:13:07.758314 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-24 00:13:07.758325 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-24 00:13:07.758336 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-24 00:13:07.758347 | orchestrator | ++ export ARA=false
2025-07-24 00:13:07.758358 | orchestrator | ++ ARA=false
2025-07-24 00:13:07.758369 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-24 00:13:07.758380 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-24 00:13:07.758392 | orchestrator | ++ export TEMPEST=true
2025-07-24 00:13:07.758410 | orchestrator | ++ TEMPEST=true
2025-07-24 00:13:07.758428 | orchestrator | ++ export IS_ZUUL=true
2025-07-24 00:13:07.758446 | orchestrator | ++ IS_ZUUL=true
2025-07-24 00:13:07.758466 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173
2025-07-24 00:13:07.758483 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.173
2025-07-24 00:13:07.758503 | orchestrator | ++ export EXTERNAL_API=false
2025-07-24 00:13:07.758522 | orchestrator | ++ EXTERNAL_API=false
2025-07-24 00:13:07.758542 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-24 00:13:07.758563 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-24 00:13:07.758582 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-24 00:13:07.758598 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-24 00:13:07.758610 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-24 00:13:07.758620 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-24 00:13:07.758646 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-24 00:13:07.758657 | orchestrator | ++ export INTERACTIVE=false
2025-07-24 00:13:07.758668 | orchestrator | ++ INTERACTIVE=false
2025-07-24 00:13:07.758690 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-24 00:13:07.758705 | orchestrator | ++
OSISM_APPLY_RETRY=1 2025-07-24 00:13:07.758716 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-24 00:13:07.758727 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-24 00:13:07.758738 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-07-24 00:13:07.765175 | orchestrator | + set -e 2025-07-24 00:13:07.765229 | orchestrator | + VERSION=reef 2025-07-24 00:13:07.765986 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-24 00:13:07.774390 | orchestrator | + [[ -n ceph_version: reef ]] 2025-07-24 00:13:07.774477 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-07-24 00:13:07.779926 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-07-24 00:13:07.785958 | orchestrator | + set -e 2025-07-24 00:13:07.786002 | orchestrator | + VERSION=2024.2 2025-07-24 00:13:07.786976 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-24 00:13:07.790651 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-07-24 00:13:07.790695 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-07-24 00:13:07.796268 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-07-24 00:13:07.797305 | orchestrator | ++ semver latest 7.0.0 2025-07-24 00:13:07.858162 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-24 00:13:07.858286 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-24 00:13:07.858303 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-07-24 00:13:07.858317 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-07-24 00:13:07.951680 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-24 00:13:07.953719 | orchestrator | + source /opt/venv/bin/activate 2025-07-24 00:13:07.954834 | orchestrator | ++ 
deactivate nondestructive 2025-07-24 00:13:07.954882 | orchestrator | ++ '[' -n '' ']' 2025-07-24 00:13:07.954903 | orchestrator | ++ '[' -n '' ']' 2025-07-24 00:13:07.954914 | orchestrator | ++ hash -r 2025-07-24 00:13:07.954924 | orchestrator | ++ '[' -n '' ']' 2025-07-24 00:13:07.954934 | orchestrator | ++ unset VIRTUAL_ENV 2025-07-24 00:13:07.954954 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-07-24 00:13:07.954964 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-07-24 00:13:07.954979 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-07-24 00:13:07.955043 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-07-24 00:13:07.955059 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-07-24 00:13:07.955075 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-07-24 00:13:07.955171 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-24 00:13:07.955190 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-24 00:13:07.955202 | orchestrator | ++ export PATH 2025-07-24 00:13:07.955275 | orchestrator | ++ '[' -n '' ']' 2025-07-24 00:13:07.955467 | orchestrator | ++ '[' -z '' ']' 2025-07-24 00:13:07.955484 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-07-24 00:13:07.955495 | orchestrator | ++ PS1='(venv) ' 2025-07-24 00:13:07.955506 | orchestrator | ++ export PS1 2025-07-24 00:13:07.955522 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-07-24 00:13:07.955535 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-07-24 00:13:07.955554 | orchestrator | ++ hash -r 2025-07-24 00:13:07.955732 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-07-24 00:13:09.988202 | orchestrator | 2025-07-24 00:13:09.988405 | orchestrator | PLAY [Copy custom facts] 
******************************************************* 2025-07-24 00:13:09.988446 | orchestrator | 2025-07-24 00:13:09.988474 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-24 00:13:10.578667 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:10.578771 | orchestrator | 2025-07-24 00:13:10.578787 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-07-24 00:13:11.623166 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:11.623312 | orchestrator | 2025-07-24 00:13:11.623331 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-07-24 00:13:11.623344 | orchestrator | 2025-07-24 00:13:11.623356 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-24 00:13:14.076669 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:14.076772 | orchestrator | 2025-07-24 00:13:14.076789 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-07-24 00:13:14.131168 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:14.131291 | orchestrator | 2025-07-24 00:13:14.131309 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-07-24 00:13:14.592806 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:14.592905 | orchestrator | 2025-07-24 00:13:14.592920 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-07-24 00:13:14.624928 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:13:14.625065 | orchestrator | 2025-07-24 00:13:14.625079 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-24 00:13:14.978765 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:14.978865 | orchestrator | 2025-07-24 00:13:14.978880 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-07-24 00:13:15.044535 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:13:15.044663 | orchestrator | 2025-07-24 00:13:15.044690 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-07-24 00:13:15.440637 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:15.440764 | orchestrator | 2025-07-24 00:13:15.440792 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-07-24 00:13:15.553895 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:13:15.553997 | orchestrator | 2025-07-24 00:13:15.554163 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-07-24 00:13:15.554183 | orchestrator | 2025-07-24 00:13:15.554196 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-24 00:13:17.418337 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:17.418447 | orchestrator | 2025-07-24 00:13:17.418464 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-07-24 00:13:17.540460 | orchestrator | included: osism.services.traefik for testbed-manager 2025-07-24 00:13:17.540552 | orchestrator | 2025-07-24 00:13:17.540567 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-07-24 00:13:17.596400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-07-24 00:13:17.596482 | orchestrator | 2025-07-24 00:13:17.596493 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-07-24 00:13:18.755490 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-07-24 00:13:18.755614 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-07-24 00:13:18.755652 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-07-24 00:13:18.755676 | orchestrator | 2025-07-24 00:13:18.755689 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-07-24 00:13:20.652891 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-07-24 00:13:20.653741 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-07-24 00:13:20.653777 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-07-24 00:13:20.653793 | orchestrator | 2025-07-24 00:13:20.653807 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-07-24 00:13:21.319136 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-24 00:13:21.319295 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:21.319311 | orchestrator | 2025-07-24 00:13:21.319324 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-07-24 00:13:21.989979 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-24 00:13:21.990134 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:21.990150 | orchestrator | 2025-07-24 00:13:21.990162 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-07-24 00:13:22.032106 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:13:22.032214 | orchestrator | 2025-07-24 00:13:22.032229 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-07-24 00:13:22.378458 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:22.378557 | orchestrator | 2025-07-24 00:13:22.378572 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-07-24 00:13:22.447155 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-07-24 00:13:22.447281 | orchestrator | 2025-07-24 00:13:22.447296 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-07-24 00:13:23.528056 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:23.528159 | orchestrator | 2025-07-24 00:13:23.528174 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-07-24 00:13:24.375008 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:24.375109 | orchestrator | 2025-07-24 00:13:24.375125 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-07-24 00:13:36.336655 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:36.336765 | orchestrator | 2025-07-24 00:13:36.336782 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-07-24 00:13:36.387345 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:13:36.387433 | orchestrator | 2025-07-24 00:13:36.387449 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-07-24 00:13:36.387463 | orchestrator | 2025-07-24 00:13:36.387474 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-24 00:13:38.319397 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:38.319501 | orchestrator | 2025-07-24 00:13:38.319544 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-07-24 00:13:38.427707 | orchestrator | included: osism.services.manager for testbed-manager 2025-07-24 00:13:38.427806 | orchestrator | 2025-07-24 00:13:38.427821 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-07-24 00:13:38.508383 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-07-24 00:13:38.508476 | orchestrator | 2025-07-24 00:13:38.508492 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-07-24 00:13:41.479562 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:41.479663 | orchestrator | 2025-07-24 00:13:41.479678 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-07-24 00:13:41.542841 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:41.542914 | orchestrator | 2025-07-24 00:13:41.542923 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-07-24 00:13:41.691949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-07-24 00:13:41.692070 | orchestrator | 2025-07-24 00:13:41.692098 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-07-24 00:13:44.730321 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-07-24 00:13:44.730424 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-07-24 00:13:44.730438 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-07-24 00:13:44.730450 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-07-24 00:13:44.730461 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-07-24 00:13:44.730472 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-07-24 00:13:44.730483 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-07-24 00:13:44.730494 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-07-24 00:13:44.730505 | orchestrator | 2025-07-24 00:13:44.730516 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-07-24 00:13:45.402966 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:45.403057 | orchestrator | 2025-07-24 00:13:45.403067 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-07-24 00:13:46.058563 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:46.058666 | orchestrator | 2025-07-24 00:13:46.058681 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-07-24 00:13:46.141002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-07-24 00:13:46.141098 | orchestrator | 2025-07-24 00:13:46.141139 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-07-24 00:13:47.420066 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-07-24 00:13:47.420230 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-07-24 00:13:47.420250 | orchestrator | 2025-07-24 00:13:47.420263 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-07-24 00:13:48.078789 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:48.078885 | orchestrator | 2025-07-24 00:13:48.078901 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-07-24 00:13:48.141947 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:13:48.142083 | orchestrator | 2025-07-24 00:13:48.142140 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-07-24 00:13:48.210142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-07-24 00:13:48.210237 | orchestrator | 2025-07-24 00:13:48.210252 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2025-07-24 00:13:49.666692 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-24 00:13:49.666789 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-24 00:13:49.666813 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:49.666857 | orchestrator | 2025-07-24 00:13:49.666875 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-07-24 00:13:50.332913 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:50.333019 | orchestrator | 2025-07-24 00:13:50.333035 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-07-24 00:13:50.397577 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:13:50.397686 | orchestrator | 2025-07-24 00:13:50.397702 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-07-24 00:13:50.504189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-07-24 00:13:50.504282 | orchestrator | 2025-07-24 00:13:50.504297 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-07-24 00:13:51.051728 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:51.051836 | orchestrator | 2025-07-24 00:13:51.051853 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-07-24 00:13:51.485533 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:51.485624 | orchestrator | 2025-07-24 00:13:51.485640 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-07-24 00:13:52.768815 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-07-24 00:13:52.768914 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-07-24 
00:13:52.768929 | orchestrator | 2025-07-24 00:13:52.768942 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-07-24 00:13:53.452801 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:53.452898 | orchestrator | 2025-07-24 00:13:53.452915 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-07-24 00:13:53.928747 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:53.928845 | orchestrator | 2025-07-24 00:13:53.928860 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-07-24 00:13:54.294243 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:54.294319 | orchestrator | 2025-07-24 00:13:54.294330 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-07-24 00:13:54.345472 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:13:54.345570 | orchestrator | 2025-07-24 00:13:54.345585 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-07-24 00:13:54.426797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-07-24 00:13:54.426892 | orchestrator | 2025-07-24 00:13:54.426907 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-07-24 00:13:54.476489 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:54.476610 | orchestrator | 2025-07-24 00:13:54.476626 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-07-24 00:13:56.628947 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-07-24 00:13:56.629050 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-07-24 00:13:56.629067 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-07-24 
00:13:56.629105 | orchestrator | 2025-07-24 00:13:56.629118 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-07-24 00:13:57.385845 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:57.385942 | orchestrator | 2025-07-24 00:13:57.385956 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-07-24 00:13:58.126159 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:58.126251 | orchestrator | 2025-07-24 00:13:58.126266 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-07-24 00:13:58.893698 | orchestrator | changed: [testbed-manager] 2025-07-24 00:13:58.893790 | orchestrator | 2025-07-24 00:13:58.893807 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-07-24 00:13:58.980715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-07-24 00:13:58.980836 | orchestrator | 2025-07-24 00:13:58.980862 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-07-24 00:13:59.027967 | orchestrator | ok: [testbed-manager] 2025-07-24 00:13:59.028061 | orchestrator | 2025-07-24 00:13:59.028114 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-07-24 00:13:59.794275 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-07-24 00:13:59.794368 | orchestrator | 2025-07-24 00:13:59.794383 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-07-24 00:13:59.882489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-07-24 00:13:59.882582 | orchestrator | 2025-07-24 00:13:59.882598 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2025-07-24 00:14:00.618522 | orchestrator | changed: [testbed-manager] 2025-07-24 00:14:00.618609 | orchestrator | 2025-07-24 00:14:00.618622 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-07-24 00:14:01.272480 | orchestrator | ok: [testbed-manager] 2025-07-24 00:14:01.272627 | orchestrator | 2025-07-24 00:14:01.272641 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-07-24 00:14:01.328703 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:14:01.328797 | orchestrator | 2025-07-24 00:14:01.328814 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-07-24 00:14:01.385569 | orchestrator | ok: [testbed-manager] 2025-07-24 00:14:01.385661 | orchestrator | 2025-07-24 00:14:01.385675 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-07-24 00:14:02.314904 | orchestrator | changed: [testbed-manager] 2025-07-24 00:14:02.315793 | orchestrator | 2025-07-24 00:14:02.315838 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-07-24 00:15:09.379575 | orchestrator | changed: [testbed-manager] 2025-07-24 00:15:09.379681 | orchestrator | 2025-07-24 00:15:09.379698 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-07-24 00:15:10.369272 | orchestrator | ok: [testbed-manager] 2025-07-24 00:15:10.370305 | orchestrator | 2025-07-24 00:15:10.370349 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-07-24 00:15:10.429089 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:15:10.429189 | orchestrator | 2025-07-24 00:15:10.429204 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2025-07-24 00:15:13.239950 | orchestrator | changed: [testbed-manager] 2025-07-24 00:15:13.240056 | orchestrator | 2025-07-24 00:15:13.240075 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-07-24 00:15:13.293724 | orchestrator | ok: [testbed-manager] 2025-07-24 00:15:13.293824 | orchestrator | 2025-07-24 00:15:13.293847 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-07-24 00:15:13.293912 | orchestrator | 2025-07-24 00:15:13.293933 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-07-24 00:15:13.349581 | orchestrator | skipping: [testbed-manager] 2025-07-24 00:15:13.349685 | orchestrator | 2025-07-24 00:15:13.349701 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-07-24 00:16:13.406124 | orchestrator | Pausing for 60 seconds 2025-07-24 00:16:13.406234 | orchestrator | changed: [testbed-manager] 2025-07-24 00:16:13.406249 | orchestrator | 2025-07-24 00:16:13.406261 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-07-24 00:16:17.106796 | orchestrator | changed: [testbed-manager] 2025-07-24 00:16:17.106900 | orchestrator | 2025-07-24 00:16:17.106915 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-07-24 00:17:19.452607 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-07-24 00:17:19.452729 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-07-24 00:17:19.452746 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2025-07-24 00:17:19.452758 | orchestrator | changed: [testbed-manager]
2025-07-24 00:17:19.452772 | orchestrator |
2025-07-24 00:17:19.452785 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-24 00:17:29.338384 | orchestrator | changed: [testbed-manager]
2025-07-24 00:17:29.338554 | orchestrator |
2025-07-24 00:17:29.338572 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-24 00:17:29.417994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-24 00:17:29.418139 | orchestrator |
2025-07-24 00:17:29.418154 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-24 00:17:29.418166 | orchestrator |
2025-07-24 00:17:29.418178 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-24 00:17:29.467222 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:17:29.467317 | orchestrator |
2025-07-24 00:17:29.467332 | orchestrator | PLAY RECAP *********************************************************************
2025-07-24 00:17:29.467345 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-24 00:17:29.467357 | orchestrator |
2025-07-24 00:17:29.567713 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-24 00:17:29.567808 | orchestrator | + deactivate
2025-07-24 00:17:29.567823 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-24 00:17:29.567835 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-24 00:17:29.567846 | orchestrator | + export PATH
2025-07-24 00:17:29.567857 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-24 00:17:29.567868 | orchestrator | + '[' -n '' ']'
2025-07-24 00:17:29.567879 | orchestrator | + hash -r
2025-07-24 00:17:29.567912 | orchestrator | + '[' -n '' ']'
2025-07-24 00:17:29.567924 | orchestrator | + unset VIRTUAL_ENV
2025-07-24 00:17:29.567934 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-24 00:17:29.567955 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-24 00:17:29.567974 | orchestrator | + unset -f deactivate
2025-07-24 00:17:29.567994 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-24 00:17:29.574005 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-24 00:17:29.574113 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-24 00:17:29.574126 | orchestrator | + local max_attempts=60
2025-07-24 00:17:29.574138 | orchestrator | + local name=ceph-ansible
2025-07-24 00:17:29.574148 | orchestrator | + local attempt_num=1
2025-07-24 00:17:29.575171 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-24 00:17:29.614864 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-24 00:17:29.614940 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-24 00:17:29.614954 | orchestrator | + local max_attempts=60
2025-07-24 00:17:29.614966 | orchestrator | + local name=kolla-ansible
2025-07-24 00:17:29.614977 | orchestrator | + local attempt_num=1
2025-07-24 00:17:29.615653 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-24 00:17:29.658603 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-24 00:17:29.658702 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-24 00:17:29.658717 | orchestrator | + local max_attempts=60
2025-07-24 00:17:29.658729 | orchestrator | + local name=osism-ansible
2025-07-24 00:17:29.658740 | orchestrator | + local attempt_num=1
2025-07-24 00:17:29.659573 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-24 00:17:29.701669 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-24 00:17:29.701757 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-24 00:17:29.701772 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-24 00:17:30.419297 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-24 00:17:30.631612 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-24 00:17:30.631710 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2025-07-24 00:17:30.631725 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2025-07-24 00:17:30.631737 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2025-07-24 00:17:30.631773 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp
2025-07-24 00:17:30.631796 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2025-07-24 00:17:30.631808 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2025-07-24 00:17:30.631819 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2025-07-24 00:17:30.631830 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2025-07-24 00:17:30.631841 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2025-07-24 00:17:30.631851 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2025-07-24 00:17:30.631862 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2025-07-24 00:17:30.631873 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2025-07-24 00:17:30.631884 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2025-07-24 00:17:30.631894 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2025-07-24 00:17:30.639751 | orchestrator | ++ semver latest 7.0.0
2025-07-24 00:17:30.700658 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-24 00:17:30.700744 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-24 00:17:30.700761 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-07-24 00:17:30.705454 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-07-24 00:17:42.848411 | orchestrator | 2025-07-24 00:17:42 | INFO  | Task 25a740ba-aabf-4906-b596-72fd2e547871 (resolvconf) was prepared for execution.
2025-07-24 00:17:42.848583 | orchestrator | 2025-07-24 00:17:42 | INFO  | It takes a moment until task 25a740ba-aabf-4906-b596-72fd2e547871 (resolvconf) has been started and output is visible here.
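The `wait_for_container_healthy` calls in the xtrace above each probe a container once with `docker inspect`. Reconstructed as a standalone helper it would look roughly like this — a sketch only: the retry loop, sleep interval, and error message are assumptions inferred from the `max_attempts`/`attempt_num` locals, and the real script invokes `/usr/bin/docker` by absolute path.

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy, reconstructed from the xtrace above.
# Polls `docker inspect` until the container reports "healthy", giving up
# after max_attempts probes. The 5s sleep is an assumption, not from the log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

Called as in the trace, e.g. `wait_for_container_healthy 60 ceph-ansible`.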
2025-07-24 00:18:02.714787 | orchestrator |
2025-07-24 00:18:02.714909 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-07-24 00:18:02.714927 | orchestrator |
2025-07-24 00:18:02.714941 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-24 00:18:02.714953 | orchestrator | Thursday 24 July 2025 00:17:49 +0000 (0:00:00.149) 0:00:00.149 *********
2025-07-24 00:18:02.714964 | orchestrator | ok: [testbed-manager]
2025-07-24 00:18:02.714976 | orchestrator |
2025-07-24 00:18:02.714988 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-24 00:18:02.715000 | orchestrator | Thursday 24 July 2025 00:17:53 +0000 (0:00:04.204) 0:00:04.354 *********
2025-07-24 00:18:02.715010 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:18:02.715026 | orchestrator |
2025-07-24 00:18:02.715037 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-24 00:18:02.715071 | orchestrator | Thursday 24 July 2025 00:17:53 +0000 (0:00:00.058) 0:00:04.413 *********
2025-07-24 00:18:02.715083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-07-24 00:18:02.715095 | orchestrator |
2025-07-24 00:18:02.715106 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-24 00:18:02.715117 | orchestrator | Thursday 24 July 2025 00:17:53 +0000 (0:00:00.084) 0:00:04.497 *********
2025-07-24 00:18:02.715128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-07-24 00:18:02.715139 | orchestrator |
2025-07-24 00:18:02.715149 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-24 00:18:02.715160 | orchestrator | Thursday 24 July 2025 00:17:53 +0000 (0:00:00.086) 0:00:04.584 *********
2025-07-24 00:18:02.715171 | orchestrator | ok: [testbed-manager]
2025-07-24 00:18:02.715181 | orchestrator |
2025-07-24 00:18:02.715192 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-24 00:18:02.715203 | orchestrator | Thursday 24 July 2025 00:17:55 +0000 (0:00:01.642) 0:00:06.226 *********
2025-07-24 00:18:02.715213 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:18:02.715224 | orchestrator |
2025-07-24 00:18:02.715234 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-24 00:18:02.715245 | orchestrator | Thursday 24 July 2025 00:17:55 +0000 (0:00:00.101) 0:00:06.327 *********
2025-07-24 00:18:02.715256 | orchestrator | ok: [testbed-manager]
2025-07-24 00:18:02.715267 | orchestrator |
2025-07-24 00:18:02.715277 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-24 00:18:02.715288 | orchestrator | Thursday 24 July 2025 00:17:55 +0000 (0:00:00.727) 0:00:07.055 *********
2025-07-24 00:18:02.715301 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:18:02.715313 | orchestrator |
2025-07-24 00:18:02.715325 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-24 00:18:02.715339 | orchestrator | Thursday 24 July 2025 00:17:56 +0000 (0:00:00.104) 0:00:07.159 *********
2025-07-24 00:18:02.715352 | orchestrator | changed: [testbed-manager]
2025-07-24 00:18:02.715365 | orchestrator |
2025-07-24 00:18:02.715377 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-24 00:18:02.715390 | orchestrator | Thursday 24 July 2025 00:17:57 +0000 (0:00:01.056) 0:00:08.216 *********
2025-07-24 00:18:02.715446 | orchestrator | changed: [testbed-manager]
2025-07-24 00:18:02.715471 | orchestrator |
2025-07-24 00:18:02.715491 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-24 00:18:02.715506 | orchestrator | Thursday 24 July 2025 00:17:58 +0000 (0:00:01.538) 0:00:09.907 *********
2025-07-24 00:18:02.715517 | orchestrator | ok: [testbed-manager]
2025-07-24 00:18:02.715527 | orchestrator |
2025-07-24 00:18:02.715538 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-24 00:18:02.715549 | orchestrator | Thursday 24 July 2025 00:18:00 +0000 (0:00:01.538) 0:00:11.446 *********
2025-07-24 00:18:02.715560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-07-24 00:18:02.715571 | orchestrator |
2025-07-24 00:18:02.715593 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-24 00:18:02.715604 | orchestrator | Thursday 24 July 2025 00:18:00 +0000 (0:00:00.092) 0:00:11.539 *********
2025-07-24 00:18:02.715615 | orchestrator | changed: [testbed-manager]
2025-07-24 00:18:02.715625 | orchestrator |
2025-07-24 00:18:02.715636 | orchestrator | PLAY RECAP *********************************************************************
2025-07-24 00:18:02.715648 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-24 00:18:02.715659 | orchestrator |
2025-07-24 00:18:02.715678 | orchestrator |
2025-07-24 00:18:02.715689 | orchestrator | TASKS RECAP ********************************************************************
2025-07-24 00:18:02.715700 | orchestrator | Thursday 24 July 2025 00:18:02 +0000 (0:00:01.639) 0:00:13.179 *********
2025-07-24 00:18:02.715710 | orchestrator | ===============================================================================
2025-07-24 00:18:02.715721 | orchestrator | Gathering Facts --------------------------------------------------------- 4.20s
2025-07-24 00:18:02.715732 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.69s
2025-07-24 00:18:02.715743 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.64s
2025-07-24 00:18:02.715753 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.64s
2025-07-24 00:18:02.715764 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.54s
2025-07-24 00:18:02.715775 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 1.06s
2025-07-24 00:18:02.715804 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.73s
2025-07-24 00:18:02.715815 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.10s
2025-07-24 00:18:02.715826 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.10s
2025-07-24 00:18:02.715837 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-07-24 00:18:02.715847 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-07-24 00:18:02.715858 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-07-24 00:18:02.715869 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-07-24 00:18:02.988527 | orchestrator | + osism apply sshconfig
2025-07-24 00:18:14.948595 | orchestrator | 2025-07-24 00:18:14 | INFO  | Task 2aae7865-2d56-43fb-929c-f98f21b876fa (sshconfig) was prepared for execution.
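The play's "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task boils down to replacing /etc/resolv.conf with a symlink to systemd-resolved's stub file. A minimal shell equivalent — the function wrapper and optional root prefix are illustrative additions so the sketch can be exercised against a scratch tree; the role itself targets the real filesystem:

```shell
#!/usr/bin/env bash
# Sketch of the symlink step from the resolvconf role above.
# The optional root prefix is an illustrative addition for testing only.
link_stub_resolv() {
    local root="${1:-}"
    ln -sfn /run/systemd/resolve/stub-resolv.conf "${root}/etc/resolv.conf"
}
```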
2025-07-24 00:18:14.948716 | orchestrator | 2025-07-24 00:18:14 | INFO  | It takes a moment until task 2aae7865-2d56-43fb-929c-f98f21b876fa (sshconfig) has been started and output is visible here.
2025-07-24 00:18:32.454390 | orchestrator |
2025-07-24 00:18:32.454501 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-07-24 00:18:32.454517 | orchestrator |
2025-07-24 00:18:32.454530 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-07-24 00:18:32.454541 | orchestrator | Thursday 24 July 2025 00:18:21 +0000 (0:00:00.152) 0:00:00.152 *********
2025-07-24 00:18:32.454552 | orchestrator | ok: [testbed-manager]
2025-07-24 00:18:32.454564 | orchestrator |
2025-07-24 00:18:32.454575 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-07-24 00:18:32.454586 | orchestrator | Thursday 24 July 2025 00:18:22 +0000 (0:00:00.827) 0:00:00.979 *********
2025-07-24 00:18:32.454597 | orchestrator | changed: [testbed-manager]
2025-07-24 00:18:32.454609 | orchestrator |
2025-07-24 00:18:32.454620 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-07-24 00:18:32.454630 | orchestrator | Thursday 24 July 2025 00:18:22 +0000 (0:00:00.956) 0:00:01.936 *********
2025-07-24 00:18:32.454641 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-07-24 00:18:32.454652 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-07-24 00:18:32.454663 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-07-24 00:18:32.454674 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-07-24 00:18:32.454685 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-07-24 00:18:32.454696 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-07-24 00:18:32.454728 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-07-24 00:18:32.454740 | orchestrator |
2025-07-24 00:18:32.454750 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-07-24 00:18:32.454761 | orchestrator | Thursday 24 July 2025 00:18:31 +0000 (0:00:08.024) 0:00:09.960 *********
2025-07-24 00:18:32.454797 | orchestrator | skipping: [testbed-manager]
2025-07-24 00:18:32.454809 | orchestrator |
2025-07-24 00:18:32.454820 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-07-24 00:18:32.454831 | orchestrator | Thursday 24 July 2025 00:18:31 +0000 (0:00:00.069) 0:00:10.030 *********
2025-07-24 00:18:32.454841 | orchestrator | changed: [testbed-manager]
2025-07-24 00:18:32.454852 | orchestrator |
2025-07-24 00:18:32.454863 | orchestrator | PLAY RECAP *********************************************************************
2025-07-24 00:18:32.454876 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-24 00:18:32.454890 | orchestrator |
2025-07-24 00:18:32.454903 | orchestrator |
2025-07-24 00:18:32.454915 | orchestrator | TASKS RECAP ********************************************************************
2025-07-24 00:18:32.454928 | orchestrator | Thursday 24 July 2025 00:18:31 +0000 (0:00:00.819) 0:00:10.849 *********
2025-07-24 00:18:32.454940 | orchestrator | ===============================================================================
2025-07-24 00:18:32.454951 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 8.03s
2025-07-24 00:18:32.454962 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.96s
2025-07-24 00:18:32.454973 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.83s
2025-07-24 00:18:32.454983 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.82s
2025-07-24 00:18:32.454994 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-07-24 00:18:32.735859 | orchestrator | + osism apply known-hosts
2025-07-24 00:18:44.841915 | orchestrator | 2025-07-24 00:18:44 | INFO  | Task 4a8cd9d9-6cf0-4a9a-9298-6ada6ba794d4 (known-hosts) was prepared for execution.
2025-07-24 00:18:44.842082 | orchestrator | 2025-07-24 00:18:44 | INFO  | It takes a moment until task 4a8cd9d9-6cf0-4a9a-9298-6ada6ba794d4 (known-hosts) has been started and output is visible here.
2025-07-24 00:18:59.006335 | orchestrator | 2025-07-24 00:18:59 | INFO  | Task f739bc9d-260c-470d-950e-5fe8bc194513 (known-hosts) was prepared for execution.
2025-07-24 00:18:59.006434 | orchestrator | 2025-07-24 00:18:59 | INFO  | It takes a moment until task f739bc9d-260c-470d-950e-5fe8bc194513 (known-hosts) has been started and output is visible here.
2025-07-24 00:19:11.845634 | orchestrator |
2025-07-24 00:19:11.845752 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-24 00:19:11.845770 | orchestrator |
2025-07-24 00:19:11.845783 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-24 00:19:11.845795 | orchestrator | Thursday 24 July 2025 00:18:50 +0000 (0:00:00.155) 0:00:00.155 *********
2025-07-24 00:19:11.845807 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-24 00:19:11.845819 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-24 00:19:11.845830 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-24 00:19:11.845841 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-24 00:19:11.845852 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-24 00:19:11.845863 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-24 00:19:11.845873 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-24 00:19:11.845884 | orchestrator |
2025-07-24 00:19:11.845895 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-24 00:19:11.845907 | orchestrator | Thursday 24 July 2025 00:18:58 +0000 (0:00:07.187) 0:00:07.343 *********
2025-07-24 00:19:11.845920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-24 00:19:11.845933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-24 00:19:11.845965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-24 00:19:11.845986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-24 00:19:11.845997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-24 00:19:11.846008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-24 00:19:11.846081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-24 00:19:11.846093 | orchestrator |
2025-07-24 00:19:11.846111 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-24 00:19:11.846131 | orchestrator | Thursday 24 July 2025 00:18:58 +0000 (0:00:00.190) 0:00:07.533 *********
2025-07-24 00:19:11.846152 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-24 00:19:11.846173 | orchestrator |
2025-07-24 00:19:11.846193 | orchestrator | Task failed.
2025-07-24 00:19:11.846210 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3
2025-07-24 00:19:11.846223 | orchestrator |
2025-07-24 00:19:11.846236 | orchestrator | 1 ---
2025-07-24 00:19:11.846274 | orchestrator | 2 - name: Write scanned known_hosts entries
2025-07-24 00:19:11.846288 | orchestrator |  ^ column 3
2025-07-24 00:19:11.846301 | orchestrator |
2025-07-24 00:19:11.846313 | orchestrator | <<< caused by >>>
2025-07-24 00:19:11.846325 | orchestrator |
2025-07-24 00:19:11.846337 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-24 00:19:11.846350 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-24 00:19:11.846362 | orchestrator |
2025-07-24 00:19:11.846375 | orchestrator | 10 when:
2025-07-24 00:19:11.846387 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-24 00:19:11.846399 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-24 00:19:11.846412 | orchestrator |  ^ column 7
2025-07-24 00:19:11.846424 | orchestrator |
2025-07-24 00:19:11.846437 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-24 00:19:11.846449 | orchestrator |
2025-07-24 00:19:11.846462 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+HE4SdCr4rdJRP6GHwWyzHXntuv8b2UgBAOWFGT1ObAXGVspmV4+frQtylLUB1A72dcjM9axkxrizojE9Gfxc=) => changed=false
2025-07-24 00:19:11.846477 | orchestrator |  ansible_loop_var: inner_item
2025-07-24 00:19:11.846490 | orchestrator |  inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+HE4SdCr4rdJRP6GHwWyzHXntuv8b2UgBAOWFGT1ObAXGVspmV4+frQtylLUB1A72dcjM9axkxrizojE9Gfxc=
2025-07-24 00:19:11.846503 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-24 00:19:11.846538 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHEI5EG+NLb7p1o/qepSBdDruRGpNs+czYs3aEZ0tRUDy/a3R6izeqLZVP2eMGMYoDeIZuhv8u+68hHwgeXNgkGPri1xLfX8jPoA0vAHOPZ1lP0JS0K5Ca8ecDwlSI40mW7KHCQPLLJVwVrKMfOibBDrtY7NdvRow+aRQlT50001v/+rDkmYxFrFc+yIJ6JV/vpZW4pQe9XR1NPkLRBIyuKYyyJd2dimoQ1aHr0+5yXWkNpN9pkwcOKUycyoizFuPlBuDJrnlh2KMTpRuH8EDUP12AyzaYYxm4YgACbeAqCJr5dZKjuEf0xetDIPURNrUcmbjXFTtG5juT8idkV8QXF5sHPJpttNoriOrsCyT/aF6SDPfmEHABDtlaJVLbdGwAU1kLV9/Xd0ODCScese1RDmHjLDB8LHep0N846JAzgTIyfuHDrseFIKE+ZuleMYHy0AZ1PERWmwWmggsk2EWCQ2PCyqp6UbbdpefSMSL7mtsPFGse5vAK9jxTmzA5F/M=) => changed=false
2025-07-24 00:19:11.846564 | orchestrator |  ansible_loop_var: inner_item
2025-07-24 00:19:11.846576 | orchestrator |  inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHEI5EG+NLb7p1o/qepSBdDruRGpNs+czYs3aEZ0tRUDy/a3R6izeqLZVP2eMGMYoDeIZuhv8u+68hHwgeXNgkGPri1xLfX8jPoA0vAHOPZ1lP0JS0K5Ca8ecDwlSI40mW7KHCQPLLJVwVrKMfOibBDrtY7NdvRow+aRQlT50001v/+rDkmYxFrFc+yIJ6JV/vpZW4pQe9XR1NPkLRBIyuKYyyJd2dimoQ1aHr0+5yXWkNpN9pkwcOKUycyoizFuPlBuDJrnlh2KMTpRuH8EDUP12AyzaYYxm4YgACbeAqCJr5dZKjuEf0xetDIPURNrUcmbjXFTtG5juT8idkV8QXF5sHPJpttNoriOrsCyT/aF6SDPfmEHABDtlaJVLbdGwAU1kLV9/Xd0ODCScese1RDmHjLDB8LHep0N846JAzgTIyfuHDrseFIKE+ZuleMYHy0AZ1PERWmwWmggsk2EWCQ2PCyqp6UbbdpefSMSL7mtsPFGse5vAK9jxTmzA5F/M=
2025-07-24 00:19:11.846588 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-24 00:19:11.846663 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWdVtdR5/TKZ/4CDPlSalpYrzZ2Csi5P+UPAL7yEbtA) => changed=false
2025-07-24 00:19:11.846676 | orchestrator |  ansible_loop_var: inner_item
2025-07-24 00:19:11.846688 | orchestrator |  inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWdVtdR5/TKZ/4CDPlSalpYrzZ2Csi5P+UPAL7yEbtA
2025-07-24 00:19:11.846699 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-24 00:19:11.846710 | orchestrator | 2025-07-24 00:19:11.846721 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-24 00:19:11.846732 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-07-24 00:19:11.846743 | orchestrator | 2025-07-24 00:19:11.846754 | orchestrator | 2025-07-24 00:19:11.846765 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-24 00:19:11.846775 | orchestrator | Thursday 24 July 2025 00:18:58 +0000 (0:00:00.108) 0:00:07.641 ********* 2025-07-24 00:19:11.846786 | orchestrator | =============================================================================== 2025-07-24 00:19:11.846797 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 7.19s 2025-07-24 00:19:11.846808 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-07-24 00:19:11.846819 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.11s 2025-07-24 00:19:11.846830 | orchestrator | 2025-07-24 00:19:11.846840 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-07-24 00:19:11.846851 | orchestrator | 2025-07-24 00:19:11.846862 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-07-24 00:19:11.846873 | orchestrator | Thursday 24 July 2025 00:19:05 +0000 (0:00:00.153) 0:00:00.153 ********* 2025-07-24 00:19:11.846884 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-24 00:19:11.846894 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-24 00:19:11.846905 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-24 00:19:11.846916 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-24 00:19:11.846926 | 
orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-24 00:19:11.846937 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-24 00:19:11.846947 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-24 00:19:11.846958 | orchestrator | 2025-07-24 00:19:11.846969 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-07-24 00:19:11.846986 | orchestrator | Thursday 24 July 2025 00:19:11 +0000 (0:00:06.564) 0:00:06.717 ********* 2025-07-24 00:19:11.846997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-24 00:19:11.847008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-24 00:19:11.847019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-24 00:19:11.847037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-24 00:19:12.498483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-24 00:19:12.498578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-24 00:19:12.498590 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-24 00:19:12.498601 | orchestrator | 2025-07-24 00:19:12.498611 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-24 00:19:12.498621 | orchestrator | Thursday 24 July 2025 00:19:11 +0000 (0:00:00.183) 0:00:06.901 ********* 2025-07-24 00:19:12.498631 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result. 2025-07-24 00:19:12.498641 | orchestrator |  2025-07-24 00:19:12.498651 | orchestrator | Task failed. 2025-07-24 00:19:12.498661 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3 2025-07-24 00:19:12.498670 | orchestrator |  2025-07-24 00:19:12.498679 | orchestrator | 1 --- 2025-07-24 00:19:12.498687 | orchestrator | 2 - name: Write scanned known_hosts entries 2025-07-24 00:19:12.498696 | orchestrator |  ^ column 3 2025-07-24 00:19:12.498705 | orchestrator |  2025-07-24 00:19:12.498713 | orchestrator | <<< caused by >>> 2025-07-24 00:19:12.498722 | orchestrator |  2025-07-24 00:19:12.498731 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result. 
2025-07-24 00:19:12.498740 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-24 00:19:12.498749 | orchestrator | 
2025-07-24 00:19:12.498758 | orchestrator | 10 when:
2025-07-24 00:19:12.498766 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-24 00:19:12.498775 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-24 00:19:12.498785 | orchestrator |  ^ column 7
2025-07-24 00:19:12.498793 | orchestrator | 
2025-07-24 00:19:12.498820 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-24 00:19:12.498829 | orchestrator | 
2025-07-24 00:19:12.498839 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+HE4SdCr4rdJRP6GHwWyzHXntuv8b2UgBAOWFGT1ObAXGVspmV4+frQtylLUB1A72dcjM9axkxrizojE9Gfxc=) => changed=false 
2025-07-24 00:19:12.498850 | orchestrator |  ansible_loop_var: inner_item
2025-07-24 00:19:12.498859 | orchestrator |  inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+HE4SdCr4rdJRP6GHwWyzHXntuv8b2UgBAOWFGT1ObAXGVspmV4+frQtylLUB1A72dcjM9axkxrizojE9Gfxc=
2025-07-24 00:19:12.498885 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-24 00:19:12.498897 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHEI5EG+NLb7p1o/qepSBdDruRGpNs+czYs3aEZ0tRUDy/a3R6izeqLZVP2eMGMYoDeIZuhv8u+68hHwgeXNgkGPri1xLfX8jPoA0vAHOPZ1lP0JS0K5Ca8ecDwlSI40mW7KHCQPLLJVwVrKMfOibBDrtY7NdvRow+aRQlT50001v/+rDkmYxFrFc+yIJ6JV/vpZW4pQe9XR1NPkLRBIyuKYyyJd2dimoQ1aHr0+5yXWkNpN9pkwcOKUycyoizFuPlBuDJrnlh2KMTpRuH8EDUP12AyzaYYxm4YgACbeAqCJr5dZKjuEf0xetDIPURNrUcmbjXFTtG5juT8idkV8QXF5sHPJpttNoriOrsCyT/aF6SDPfmEHABDtlaJVLbdGwAU1kLV9/Xd0ODCScese1RDmHjLDB8LHep0N846JAzgTIyfuHDrseFIKE+ZuleMYHy0AZ1PERWmwWmggsk2EWCQ2PCyqp6UbbdpefSMSL7mtsPFGse5vAK9jxTmzA5F/M=) => changed=false 
2025-07-24 00:19:12.498908 | orchestrator |  ansible_loop_var: inner_item
2025-07-24 00:19:12.498917 | orchestrator |  inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHEI5EG+NLb7p1o/qepSBdDruRGpNs+czYs3aEZ0tRUDy/a3R6izeqLZVP2eMGMYoDeIZuhv8u+68hHwgeXNgkGPri1xLfX8jPoA0vAHOPZ1lP0JS0K5Ca8ecDwlSI40mW7KHCQPLLJVwVrKMfOibBDrtY7NdvRow+aRQlT50001v/+rDkmYxFrFc+yIJ6JV/vpZW4pQe9XR1NPkLRBIyuKYyyJd2dimoQ1aHr0+5yXWkNpN9pkwcOKUycyoizFuPlBuDJrnlh2KMTpRuH8EDUP12AyzaYYxm4YgACbeAqCJr5dZKjuEf0xetDIPURNrUcmbjXFTtG5juT8idkV8QXF5sHPJpttNoriOrsCyT/aF6SDPfmEHABDtlaJVLbdGwAU1kLV9/Xd0ODCScese1RDmHjLDB8LHep0N846JAzgTIyfuHDrseFIKE+ZuleMYHy0AZ1PERWmwWmggsk2EWCQ2PCyqp6UbbdpefSMSL7mtsPFGse5vAK9jxTmzA5F/M=
2025-07-24 00:19:12.498927 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-24 00:19:12.498935 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWdVtdR5/TKZ/4CDPlSalpYrzZ2Csi5P+UPAL7yEbtA) => changed=false 
2025-07-24 00:19:12.498945 | orchestrator |  ansible_loop_var: inner_item
2025-07-24 00:19:12.498969 | orchestrator |  inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWdVtdR5/TKZ/4CDPlSalpYrzZ2Csi5P+UPAL7yEbtA
2025-07-24 00:19:12.498979 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-24 00:19:12.498987 | orchestrator | 
2025-07-24 00:19:12.498996 | orchestrator | PLAY RECAP *********************************************************************
2025-07-24 00:19:12.499005 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-24 00:19:12.499013 | orchestrator | 
2025-07-24 00:19:12.499022 | orchestrator | 
2025-07-24 00:19:12.499031 | orchestrator | TASKS RECAP ********************************************************************
2025-07-24 00:19:12.499041 | orchestrator | Thursday 24 July 2025 00:19:11 +0000 (0:00:00.095) 0:00:06.996 *********
2025-07-24 00:19:12.499051 | orchestrator | ===============================================================================
2025-07-24 00:19:12.499061 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.56s
2025-07-24 00:19:12.499071 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2025-07-24 00:19:12.499081 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.10s
2025-07-24 00:19:13.202065 | orchestrator | ERROR
2025-07-24 00:19:13.202520 | orchestrator | {
2025-07-24 00:19:13.202627 | orchestrator | "delta": "0:06:07.195190",
2025-07-24 00:19:13.202695 | orchestrator | "end": "2025-07-24 00:19:12.776143",
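The failures above all carry the same message: the `when:` clause `item['stdout_lines'] | length` returns an integer (here `3`), and recent ansible-core releases reject conditionals that do not evaluate to a strict boolean. A minimal sketch of the corrected task condition, assuming the file and line shown in the error origin; the explicit `> 0` comparison is the assumed fix, not taken from the repository:

```yaml
# roles/known_hosts/tasks/write-scanned.yml (sketch of the corrected condition)
# `| length` alone yields an int; comparing it against 0 makes the result boolean,
# which satisfies the strict-conditional check without ALLOW_BROKEN_CONDITIONALS.
when:
  - item['stdout_lines'] is defined
  - item['stdout_lines'] | length > 0
```

Setting `ALLOW_BROKEN_CONDITIONALS`, as the error suggests, would only paper over this until the option is removed.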
2025-07-24 00:19:13.202753 | orchestrator | "msg": "non-zero return code",
2025-07-24 00:19:13.202807 | orchestrator | "rc": 2,
2025-07-24 00:19:13.202964 | orchestrator | "start": "2025-07-24 00:13:05.580953"
2025-07-24 00:19:13.203037 | orchestrator | } failure
2025-07-24 00:19:13.220281 | 
2025-07-24 00:19:13.220396 | PLAY RECAP
2025-07-24 00:19:13.220469 | orchestrator | ok: 20 changed: 7 unreachable: 0 failed: 1 skipped: 2 rescued: 0 ignored: 0
2025-07-24 00:19:13.220519 | 
2025-07-24 00:19:13.358501 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-24 00:19:13.359893 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-24 00:19:14.145623 | 
2025-07-24 00:19:14.145872 | PLAY [Post output play]
2025-07-24 00:19:14.161480 | 
2025-07-24 00:19:14.161611 | LOOP [stage-output : Register sources]
2025-07-24 00:19:14.231553 | 
2025-07-24 00:19:14.231831 | TASK [stage-output : Check sudo]
2025-07-24 00:19:15.335348 | orchestrator | sudo: a password is required
2025-07-24 00:19:15.777848 | orchestrator | ok: Runtime: 0:00:00.264834
2025-07-24 00:19:15.792577 | 
2025-07-24 00:19:15.792752 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-24 00:19:15.836027 | 
2025-07-24 00:19:15.836373 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-24 00:19:15.905551 | orchestrator | ok
2025-07-24 00:19:15.914723 | 
2025-07-24 00:19:15.914889 | LOOP [stage-output : Ensure target folders exist]
2025-07-24 00:19:16.421151 | orchestrator | ok: "docs"
2025-07-24 00:19:16.421503 | 
2025-07-24 00:19:16.670875 | orchestrator | ok: "artifacts"
2025-07-24 00:19:16.939066 | orchestrator | ok: "logs"
2025-07-24 00:19:16.963897 | 
2025-07-24 00:19:16.964154 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-24 00:19:17.004799 | 
2025-07-24 00:19:17.005164 | TASK [stage-output : Make all log files readable]
2025-07-24 00:19:17.307626 | orchestrator | ok
2025-07-24 00:19:17.316251 | 
2025-07-24 00:19:17.316392 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-24 00:19:17.351080 | orchestrator | skipping: Conditional result was False
2025-07-24 00:19:17.358891 | 
2025-07-24 00:19:17.359070 | TASK [stage-output : Discover log files for compression]
2025-07-24 00:19:17.384605 | orchestrator | skipping: Conditional result was False
2025-07-24 00:19:17.395154 | 
2025-07-24 00:19:17.395311 | LOOP [stage-output : Archive everything from logs]
2025-07-24 00:19:17.440660 | 
2025-07-24 00:19:17.440858 | PLAY [Post cleanup play]
2025-07-24 00:19:17.450267 | 
2025-07-24 00:19:17.450404 | TASK [Set cloud fact (Zuul deployment)]
2025-07-24 00:19:17.503397 | orchestrator | ok
2025-07-24 00:19:17.512703 | 
2025-07-24 00:19:17.512823 | TASK [Set cloud fact (local deployment)]
2025-07-24 00:19:17.536842 | orchestrator | skipping: Conditional result was False
2025-07-24 00:19:17.547023 | 
2025-07-24 00:19:17.547141 | TASK [Clean the cloud environment]
2025-07-24 00:19:18.791712 | orchestrator | 2025-07-24 00:19:18 - clean up servers
2025-07-24 00:19:19.537992 | orchestrator | 2025-07-24 00:19:19 - testbed-manager
2025-07-24 00:19:19.629176 | orchestrator | 2025-07-24 00:19:19 - testbed-node-0
2025-07-24 00:19:19.717618 | orchestrator | 2025-07-24 00:19:19 - testbed-node-5
2025-07-24 00:19:19.807270 | orchestrator | 2025-07-24 00:19:19 - testbed-node-2
2025-07-24 00:19:19.898863 | orchestrator | 2025-07-24 00:19:19 - testbed-node-3
2025-07-24 00:19:19.993353 | orchestrator | 2025-07-24 00:19:19 - testbed-node-1
2025-07-24 00:19:20.086103 | orchestrator | 2025-07-24 00:19:20 - testbed-node-4
2025-07-24 00:19:20.172815 | orchestrator | 2025-07-24 00:19:20 - clean up keypairs
2025-07-24 00:19:20.191624 | orchestrator | 2025-07-24 00:19:20 - testbed
2025-07-24 00:19:20.217720 | orchestrator | 2025-07-24 00:19:20 - wait for servers to be gone
2025-07-24 00:19:31.071917 | orchestrator | 2025-07-24 00:19:31 - clean up ports
2025-07-24 00:19:31.264363 | orchestrator | 2025-07-24 00:19:31 - 1fbc945a-c5ee-4220-b11e-2883e206e507
2025-07-24 00:19:31.744665 | orchestrator | 2025-07-24 00:19:31 - 2b3f80dc-8fe8-4b34-8d9c-a3852ca4c9b0
2025-07-24 00:19:31.992251 | orchestrator | 2025-07-24 00:19:31 - 52e484b2-8da2-43d8-af2b-75e642e5a7b1
2025-07-24 00:19:32.205290 | orchestrator | 2025-07-24 00:19:32 - 7bc5bc69-d732-4aa8-973c-67216821634f
2025-07-24 00:19:32.415150 | orchestrator | 2025-07-24 00:19:32 - 8e49ca0c-a2da-434a-bc5c-e897bd78384c
2025-07-24 00:19:32.618909 | orchestrator | 2025-07-24 00:19:32 - ac3e7646-76be-4fcd-a4a6-64b082d7fdcd
2025-07-24 00:19:32.823753 | orchestrator | 2025-07-24 00:19:32 - c9f16333-3e51-400f-8b8b-c27d3560f4c1
2025-07-24 00:19:33.059628 | orchestrator | 2025-07-24 00:19:33 - clean up volumes
2025-07-24 00:19:33.177695 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-1-node-base
2025-07-24 00:19:33.219523 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-3-node-base
2025-07-24 00:19:33.263031 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-5-node-base
2025-07-24 00:19:33.304414 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-0-node-base
2025-07-24 00:19:33.352556 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-2-node-base
2025-07-24 00:19:33.393197 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-4-node-base
2025-07-24 00:19:33.433736 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-manager-base
2025-07-24 00:19:33.476914 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-4-node-4
2025-07-24 00:19:33.521865 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-6-node-3
2025-07-24 00:19:33.563165 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-1-node-4
2025-07-24 00:19:33.610375 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-2-node-5
2025-07-24 00:19:33.653889 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-7-node-4
2025-07-24 00:19:33.704866 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-0-node-3
2025-07-24 00:19:33.750798 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-8-node-5
2025-07-24 00:19:33.790632 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-5-node-5
2025-07-24 00:19:33.835659 | orchestrator | 2025-07-24 00:19:33 - testbed-volume-3-node-3
2025-07-24 00:19:33.881091 | orchestrator | 2025-07-24 00:19:33 - disconnect routers
2025-07-24 00:19:33.995683 | orchestrator | 2025-07-24 00:19:33 - testbed
2025-07-24 00:19:35.418361 | orchestrator | 2025-07-24 00:19:35 - clean up subnets
2025-07-24 00:19:35.457767 | orchestrator | 2025-07-24 00:19:35 - subnet-testbed-management
2025-07-24 00:19:35.638111 | orchestrator | 2025-07-24 00:19:35 - clean up networks
2025-07-24 00:19:35.800638 | orchestrator | 2025-07-24 00:19:35 - net-testbed-management
2025-07-24 00:19:36.078381 | orchestrator | 2025-07-24 00:19:36 - clean up security groups
2025-07-24 00:19:36.131871 | orchestrator | 2025-07-24 00:19:36 - testbed-management
2025-07-24 00:19:36.258670 | orchestrator | 2025-07-24 00:19:36 - testbed-node
2025-07-24 00:19:36.370370 | orchestrator | 2025-07-24 00:19:36 - clean up floating ips
2025-07-24 00:19:36.414317 | orchestrator | 2025-07-24 00:19:36 - 81.163.192.173
2025-07-24 00:19:36.752828 | orchestrator | 2025-07-24 00:19:36 - clean up routers
2025-07-24 00:19:36.878287 | orchestrator | 2025-07-24 00:19:36 - testbed
2025-07-24 00:19:38.103855 | orchestrator | ok: Runtime: 0:00:19.885278
2025-07-24 00:19:38.108327 | 
2025-07-24 00:19:38.108487 | PLAY RECAP
2025-07-24 00:19:38.108610 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-24 00:19:38.108673 | 
2025-07-24 00:19:38.248047 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-24 00:19:38.249068 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-24 00:19:38.979095 | 
2025-07-24 00:19:38.979273 | PLAY [Cleanup play]
2025-07-24 00:19:38.996300 | 
2025-07-24 00:19:38.996444 | TASK [Set cloud fact (Zuul deployment)]
2025-07-24 00:19:39.054318 | orchestrator | ok
2025-07-24 00:19:39.063744 | 
2025-07-24 00:19:39.063901 | TASK [Set cloud fact (local deployment)]
2025-07-24 00:19:39.099088 | orchestrator | skipping: Conditional result was False
2025-07-24 00:19:39.117921 | 
2025-07-24 00:19:39.118100 | TASK [Clean the cloud environment]
2025-07-24 00:19:40.271896 | orchestrator | 2025-07-24 00:19:40 - clean up servers
2025-07-24 00:19:40.751155 | orchestrator | 2025-07-24 00:19:40 - clean up keypairs
2025-07-24 00:19:40.771459 | orchestrator | 2025-07-24 00:19:40 - wait for servers to be gone
2025-07-24 00:19:40.811904 | orchestrator | 2025-07-24 00:19:40 - clean up ports
2025-07-24 00:19:40.895783 | orchestrator | 2025-07-24 00:19:40 - clean up volumes
2025-07-24 00:19:40.962685 | orchestrator | 2025-07-24 00:19:40 - disconnect routers
2025-07-24 00:19:40.991989 | orchestrator | 2025-07-24 00:19:40 - clean up subnets
2025-07-24 00:19:41.009327 | orchestrator | 2025-07-24 00:19:41 - clean up networks
2025-07-24 00:19:41.169293 | orchestrator | 2025-07-24 00:19:41 - clean up security groups
2025-07-24 00:19:41.208222 | orchestrator | 2025-07-24 00:19:41 - clean up floating ips
2025-07-24 00:19:41.232034 | orchestrator | 2025-07-24 00:19:41 - clean up routers
2025-07-24 00:19:42.002917 | orchestrator | ok: Runtime: 0:00:01.385382
2025-07-24 00:19:42.005359 | 
2025-07-24 00:19:42.005473 | PLAY RECAP
2025-07-24 00:19:42.005543 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-24 00:19:42.005578 | 
2025-07-24 00:19:42.120880 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-24 00:19:42.122551 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-24 00:19:42.867813 | 
2025-07-24 00:19:42.868029 | PLAY [Base post-fetch]
2025-07-24 00:19:42.884044 | 
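The "Clean the cloud environment" task above tears resources down in strict dependency order: servers before ports, ports and router interfaces before subnets and networks, and routers last, since OpenStack refuses to delete a resource that is still referenced. A minimal sketch encoding that ordering, using the phase names printed in the log (the helper function is hypothetical, for illustration only):

```python
# Teardown phases in the order the cleanup task runs them, as seen in the log.
CLEANUP_PHASES = [
    "clean up servers",
    "clean up keypairs",
    "clean up ports",
    "clean up volumes",
    "disconnect routers",
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]


def must_precede(earlier: str, later: str) -> bool:
    """True if `earlier` runs before `later` in this teardown ordering."""
    return CLEANUP_PHASES.index(earlier) < CLEANUP_PHASES.index(later)


# Servers hold ports, ports belong to networks, router interfaces attach subnets:
assert must_precede("clean up servers", "clean up ports")
assert must_precede("clean up ports", "clean up networks")
assert must_precede("disconnect routers", "clean up subnets")
```

Reversing any of these pairs would make the corresponding delete call fail with a "resource in use" conflict.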
2025-07-24 00:19:42.884179 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-24 00:19:42.940270 | orchestrator | skipping: Conditional result was False
2025-07-24 00:19:42.954460 | 
2025-07-24 00:19:42.954662 | TASK [fetch-output : Set log path for single node]
2025-07-24 00:19:43.005715 | orchestrator | ok
2025-07-24 00:19:43.014082 | 
2025-07-24 00:19:43.014233 | LOOP [fetch-output : Ensure local output dirs]
2025-07-24 00:19:43.527770 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/2423a3765b4b44bd9960365058545dbd/work/logs"
2025-07-24 00:19:43.801683 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2423a3765b4b44bd9960365058545dbd/work/artifacts"
2025-07-24 00:19:44.106683 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2423a3765b4b44bd9960365058545dbd/work/docs"
2025-07-24 00:19:44.121900 | 
2025-07-24 00:19:44.122063 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-24 00:19:45.065824 | orchestrator | changed: .d..t...... ./
2025-07-24 00:19:45.066228 | orchestrator | changed: All items complete
2025-07-24 00:19:45.066308 | 
2025-07-24 00:19:45.826623 | orchestrator | changed: .d..t...... ./
2025-07-24 00:19:46.568422 | orchestrator | changed: .d..t...... ./
2025-07-24 00:19:46.593075 | 
2025-07-24 00:19:46.593222 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-24 00:19:46.632093 | orchestrator | skipping: Conditional result was False
2025-07-24 00:19:46.635682 | orchestrator | skipping: Conditional result was False
2025-07-24 00:19:46.654865 | 
2025-07-24 00:19:46.655007 | PLAY RECAP
2025-07-24 00:19:46.655087 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-24 00:19:46.655132 | 
2025-07-24 00:19:46.787040 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-24 00:19:46.788049 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-24 00:19:47.523678 | 
2025-07-24 00:19:47.523850 | PLAY [Base post]
2025-07-24 00:19:47.539421 | 
2025-07-24 00:19:47.539567 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-24 00:19:49.313144 | orchestrator | changed
2025-07-24 00:19:49.322868 | 
2025-07-24 00:19:49.323020 | PLAY RECAP
2025-07-24 00:19:49.323100 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-24 00:19:49.323176 | 
2025-07-24 00:19:49.440466 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-24 00:19:49.442832 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-24 00:19:50.235983 | 
2025-07-24 00:19:50.236174 | PLAY [Base post-logs]
2025-07-24 00:19:50.246588 | 
2025-07-24 00:19:50.246716 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-24 00:19:50.742692 | localhost | changed
2025-07-24 00:19:50.760896 | 
2025-07-24 00:19:50.761114 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-24 00:19:50.801453 | localhost | ok
2025-07-24 00:19:50.810357 | 
2025-07-24 00:19:50.810535 | TASK [Set zuul-log-path fact]
2025-07-24 00:19:50.840021 | localhost | ok
2025-07-24 00:19:50.855146 | 
2025-07-24 00:19:50.855310 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-24 00:19:50.893584 | localhost | ok
2025-07-24 00:19:50.900612 | 
2025-07-24 00:19:50.900797 | TASK [upload-logs : Create log directories]
2025-07-24 00:19:51.420693 | localhost | changed
2025-07-24 00:19:51.426888 | 
2025-07-24 00:19:51.427083 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-24 00:19:51.928108 | localhost -> localhost | ok: Runtime: 0:00:00.004002
2025-07-24 00:19:51.935043 | 
2025-07-24 00:19:51.935223 | TASK [upload-logs : Upload logs to log server]
2025-07-24 00:19:52.500553 | localhost | Output suppressed because no_log was given
2025-07-24 00:19:52.503690 | 
2025-07-24 00:19:52.503858 | LOOP [upload-logs : Compress console log and json output]
2025-07-24 00:19:52.560094 | localhost | skipping: Conditional result was False
2025-07-24 00:19:52.565253 | localhost | skipping: Conditional result was False
2025-07-24 00:19:52.577587 | 
2025-07-24 00:19:52.577888 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-24 00:19:52.624358 | localhost | skipping: Conditional result was False
2025-07-24 00:19:52.625097 | 
2025-07-24 00:19:52.628358 | localhost | skipping: Conditional result was False
2025-07-24 00:19:52.641638 | 
2025-07-24 00:19:52.641875 | LOOP [upload-logs : Upload console log and json output]