2025-12-05 00:00:10.199161 | Job console starting
2025-12-05 00:00:10.258498 | Updating git repos
2025-12-05 00:00:10.678796 | Cloning repos into workspace
2025-12-05 00:00:10.915229 | Restoring repo states
2025-12-05 00:00:10.945796 | Merging changes
2025-12-05 00:00:10.945813 | Checking out repos
2025-12-05 00:00:11.270194 | Preparing playbooks
2025-12-05 00:00:12.153442 | Running Ansible setup
2025-12-05 00:00:20.634711 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-12-05 00:00:23.361909 |
2025-12-05 00:00:23.362092 | PLAY [Base pre]
2025-12-05 00:00:23.398800 |
2025-12-05 00:00:23.398998 | TASK [Setup log path fact]
2025-12-05 00:00:23.431069 | orchestrator | ok
2025-12-05 00:00:23.472032 |
2025-12-05 00:00:23.472220 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-12-05 00:00:23.539303 | orchestrator | ok
2025-12-05 00:00:23.558633 |
2025-12-05 00:00:23.558788 | TASK [emit-job-header : Print job information]
2025-12-05 00:00:23.631235 | # Job Information
2025-12-05 00:00:23.631480 | Ansible Version: 2.16.14
2025-12-05 00:00:23.631521 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2025-12-05 00:00:23.631588 | Pipeline: periodic-midnight
2025-12-05 00:00:23.631615 | Executor: 521e9411259a
2025-12-05 00:00:23.631637 | Triggered by: https://github.com/osism/testbed
2025-12-05 00:00:23.631658 | Event ID: 5dceccc39a3a4c3dbaad19580110cea1
2025-12-05 00:00:23.649310 |
2025-12-05 00:00:23.649469 | LOOP [emit-job-header : Print node information]
2025-12-05 00:00:23.903727 | orchestrator | ok:
2025-12-05 00:00:23.904005 | orchestrator | # Node Information
2025-12-05 00:00:23.904043 | orchestrator | Inventory Hostname: orchestrator
2025-12-05 00:00:23.904068 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-12-05 00:00:23.904089 | orchestrator | Username: zuul-testbed03
2025-12-05 00:00:23.904110 | orchestrator | Distro: Debian 12.12
2025-12-05 00:00:23.904132 | orchestrator | Provider: static-testbed
2025-12-05 00:00:23.904153 | orchestrator | Region:
2025-12-05 00:00:23.904174 | orchestrator | Label: testbed-orchestrator
2025-12-05 00:00:23.904193 | orchestrator | Product Name: OpenStack Nova
2025-12-05 00:00:23.904212 | orchestrator | Interface IP: 81.163.193.140
2025-12-05 00:00:23.923909 |
2025-12-05 00:00:23.924076 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-12-05 00:00:25.788579 | orchestrator -> localhost | changed
2025-12-05 00:00:25.798724 |
2025-12-05 00:00:25.812176 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-12-05 00:00:29.340209 | orchestrator -> localhost | changed
2025-12-05 00:00:29.359949 |
2025-12-05 00:00:29.360108 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-12-05 00:00:30.331349 | orchestrator -> localhost | ok
2025-12-05 00:00:30.339051 |
2025-12-05 00:00:30.339211 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-12-05 00:00:30.407689 | orchestrator | ok
2025-12-05 00:00:30.451933 | orchestrator | included: /var/lib/zuul/builds/0f87daf7bba64c959b03fea26b5993d0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-12-05 00:00:30.511081 |
2025-12-05 00:00:30.511260 | TASK [add-build-sshkey : Create Temp SSH key]
2025-12-05 00:00:36.154672 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-12-05 00:00:36.154925 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/0f87daf7bba64c959b03fea26b5993d0/work/0f87daf7bba64c959b03fea26b5993d0_id_rsa
2025-12-05 00:00:36.154966 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/0f87daf7bba64c959b03fea26b5993d0/work/0f87daf7bba64c959b03fea26b5993d0_id_rsa.pub
2025-12-05 00:00:36.154994 | orchestrator -> localhost | The key fingerprint is:
2025-12-05 00:00:36.155018 | orchestrator -> localhost | SHA256:korKnRRgZfO8yjZHxA0bRSNbeO39W68VO4IvHM2TNdU zuul-build-sshkey
2025-12-05 00:00:36.155041 | orchestrator -> localhost | The key's randomart image is:
2025-12-05 00:00:36.155075 | orchestrator -> localhost | +---[RSA 3072]----+
2025-12-05 00:00:36.155099 | orchestrator -> localhost | | + ++=. .|
2025-12-05 00:00:36.155122 | orchestrator -> localhost | | o =.O... E|
2025-12-05 00:00:36.155143 | orchestrator -> localhost | | o B... . .|
2025-12-05 00:00:36.155163 | orchestrator -> localhost | |. . . .. . . o |
2025-12-05 00:00:36.155183 | orchestrator -> localhost | | . oo S + o..|
2025-12-05 00:00:36.155206 | orchestrator -> localhost | | ..+. . ..* .o|
2025-12-05 00:00:36.155226 | orchestrator -> localhost | | .*.. ....+oo|
2025-12-05 00:00:36.155245 | orchestrator -> localhost | |..+ + o....o|
2025-12-05 00:00:36.155266 | orchestrator -> localhost | |.. o .... |
2025-12-05 00:00:36.155286 | orchestrator -> localhost | +----[SHA256]-----+
2025-12-05 00:00:36.155341 | orchestrator -> localhost | ok: Runtime: 0:00:03.439525
2025-12-05 00:00:36.163206 |
2025-12-05 00:00:36.163329 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-12-05 00:00:36.241222 | orchestrator | ok
2025-12-05 00:00:36.273467 | orchestrator | included: /var/lib/zuul/builds/0f87daf7bba64c959b03fea26b5993d0/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-12-05 00:00:36.297206 |
2025-12-05 00:00:36.297305 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-12-05 00:00:36.358494 | orchestrator | skipping: Conditional result was False
2025-12-05 00:00:36.365064 |
2025-12-05 00:00:36.365162 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-12-05 00:00:37.325569 | orchestrator | changed
2025-12-05 00:00:37.342907 |
2025-12-05 00:00:37.343015 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-12-05 00:00:37.661842 | orchestrator | ok
2025-12-05 00:00:37.666883 |
2025-12-05 00:00:37.666962 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-12-05 00:00:38.925053 | orchestrator | ok
2025-12-05 00:00:38.930732 |
2025-12-05 00:00:38.930824 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-12-05 00:00:39.441432 | orchestrator | ok
2025-12-05 00:00:39.446559 |
2025-12-05 00:00:39.446921 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-12-05 00:00:39.474704 | orchestrator | skipping: Conditional result was False
2025-12-05 00:00:39.491749 |
2025-12-05 00:00:39.491845 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-12-05 00:00:40.488951 | orchestrator -> localhost | changed
2025-12-05 00:00:40.507490 |
2025-12-05 00:00:40.507631 | TASK [add-build-sshkey : Add back temp key]
2025-12-05 00:00:41.509541 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/0f87daf7bba64c959b03fea26b5993d0/work/0f87daf7bba64c959b03fea26b5993d0_id_rsa (zuul-build-sshkey)
2025-12-05 00:00:41.509761 | orchestrator -> localhost | ok: Runtime: 0:00:00.038726
2025-12-05 00:00:41.516575 |
2025-12-05 00:00:41.516685 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-12-05 00:00:42.094907 | orchestrator | ok
2025-12-05 00:00:42.100779 |
2025-12-05 00:00:42.100879 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-12-05 00:00:42.156533 | orchestrator | skipping: Conditional result was False
2025-12-05 00:00:42.268171 |
2025-12-05 00:00:42.268326 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-12-05 00:00:42.984207 | orchestrator | ok
2025-12-05 00:00:43.015823 |
2025-12-05 00:00:43.015937 | TASK [validate-host : Define zuul_info_dir fact]
2025-12-05 00:00:43.075174 | orchestrator | ok
2025-12-05 00:00:43.084618 |
2025-12-05 00:00:43.084723 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-12-05 00:00:44.029456 | orchestrator -> localhost | ok
2025-12-05 00:00:44.036930 |
2025-12-05 00:00:44.037035 | TASK [validate-host : Collect information about the host]
2025-12-05 00:00:45.591915 | orchestrator | ok
2025-12-05 00:00:45.637159 |
2025-12-05 00:00:45.637287 | TASK [validate-host : Sanitize hostname]
2025-12-05 00:00:45.845847 | orchestrator | ok
2025-12-05 00:00:45.859310 |
2025-12-05 00:00:45.859428 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-12-05 00:00:47.192042 | orchestrator -> localhost | changed
2025-12-05 00:00:47.200514 |
2025-12-05 00:00:47.200734 | TASK [validate-host : Collect information about zuul worker]
2025-12-05 00:00:48.306485 | orchestrator | ok
2025-12-05 00:00:48.311635 |
2025-12-05 00:00:48.311735 | TASK [validate-host : Write out all zuul information for each host]
2025-12-05 00:00:50.257491 | orchestrator -> localhost | changed
2025-12-05 00:00:50.303525 |
2025-12-05 00:00:50.303769 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-12-05 00:00:50.666968 | orchestrator | ok
2025-12-05 00:00:50.680207 |
2025-12-05 00:00:50.680349 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-12-05 00:01:52.799665 | orchestrator | changed:
2025-12-05 00:01:52.799896 | orchestrator | .d..t...... src/
2025-12-05 00:01:52.799931 | orchestrator | .d..t...... src/github.com/
2025-12-05 00:01:52.799955 | orchestrator | .d..t...... src/github.com/osism/
2025-12-05 00:01:52.799977 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-12-05 00:01:52.799998 | orchestrator | RedHat.yml
2025-12-05 00:01:52.842899 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-12-05 00:01:52.842917 | orchestrator | RedHat.yml
2025-12-05 00:01:52.842971 | orchestrator | = 2.2.0"...
2025-12-05 00:02:10.092707 | orchestrator | 00:02:10.092 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-12-05 00:02:10.127278 | orchestrator | 00:02:10.126 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-12-05 00:02:10.612214 | orchestrator | 00:02:10.612 STDOUT terraform: - Installing hashicorp/local v2.6.1...
2025-12-05 00:02:11.613949 | orchestrator | 00:02:11.613 STDOUT terraform: - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2025-12-05 00:02:11.679106 | orchestrator | 00:02:11.678 STDOUT terraform: - Installing hashicorp/null v3.2.4...
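The version constraints being resolved in the messages above (for example ">= 1.53.0" for the openstack provider) come from a required_providers block in the testbed's Terraform configuration. A minimal sketch of such a block, assuming a conventional layout; only the openstack constraint and the resolved versions are taken from this log, the rest is illustrative:

terraform {
  required_providers {
    local = {
      source = "hashicorp/local"   # resolved to v2.6.1 in this run; its version constraint is cut off in the log above
    }
    null = {
      source = "hashicorp/null"    # resolved to v3.2.4 in this run
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"        # constraint shown in the log; v3.4.0 was installed
    }
  }
}

Because OpenTofu is used here, "tofu init" records the selected versions in .terraform.lock.hcl, as the initialization output below notes.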
2025-12-05 00:02:12.174107 | orchestrator | 00:02:12.173 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-12-05 00:02:12.496924 | orchestrator | 00:02:12.496 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.4.0...
2025-12-05 00:02:13.333014 | orchestrator | 00:02:13.331 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2025-12-05 00:02:13.333123 | orchestrator | 00:02:13.333 STDOUT terraform: Providers are signed by their developers.
2025-12-05 00:02:13.333155 | orchestrator | 00:02:13.333 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-12-05 00:02:13.333224 | orchestrator | 00:02:13.333 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-12-05 00:02:13.333351 | orchestrator | 00:02:13.333 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-12-05 00:02:13.333404 | orchestrator | 00:02:13.333 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-12-05 00:02:13.333460 | orchestrator | 00:02:13.333 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-12-05 00:02:13.333470 | orchestrator | 00:02:13.333 STDOUT terraform: you run "tofu init" in the future.
2025-12-05 00:02:13.334126 | orchestrator | 00:02:13.334 STDOUT terraform: OpenTofu has been successfully initialized!
2025-12-05 00:02:13.334252 | orchestrator | 00:02:13.334 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-12-05 00:02:13.334301 | orchestrator | 00:02:13.334 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-12-05 00:02:13.334309 | orchestrator | 00:02:13.334 STDOUT terraform: should now work.
2025-12-05 00:02:13.334366 | orchestrator | 00:02:13.334 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-12-05 00:02:13.334418 | orchestrator | 00:02:13.334 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-12-05 00:02:13.334478 | orchestrator | 00:02:13.334 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-12-05 00:02:13.688999 | orchestrator | 00:02:13.688 STDOUT terraform: Created and switched to workspace "ci"!
2025-12-05 00:02:13.689077 | orchestrator | 00:02:13.688 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-12-05 00:02:13.689093 | orchestrator | 00:02:13.688 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-12-05 00:02:13.689101 | orchestrator | 00:02:13.689 STDOUT terraform: for this configuration.
2025-12-05 00:02:13.991803 | orchestrator | 00:02:13.990 STDOUT terraform: ci.auto.tfvars
2025-12-05 00:02:14.188399 | orchestrator | 00:02:14.188 STDOUT terraform: default_custom.tf
2025-12-05 00:02:15.830573 | orchestrator | 00:02:15.830 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
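The refresh that starts here reads the public network, and the plan that follows resolves two image data sources with most_recent = true. A minimal sketch of how such data sources are typically declared; the literal names are placeholders, since in the real configuration they are derived from values that are only known at apply time, as the plan output shows:

data "openstack_networking_network_v2" "public" {
  name = "public"                # placeholder; the actual network name comes from the testbed configuration
}

data "openstack_images_image_v2" "image" {
  name        = "Ubuntu 24.04"   # placeholder; the plan shows the name as (known after apply)
  most_recent = true             # matches the plan output: the newest matching image is selected
}

Resources further down can then reference data.openstack_images_image_v2.image.id, which is why image_id shows up as (known after apply) in the plan below.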
2025-12-05 00:02:16.399489 | orchestrator | 00:02:16.398 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-12-05 00:02:16.656091 | orchestrator | 00:02:16.651 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-12-05 00:02:16.656337 | orchestrator | 00:02:16.654 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-12-05 00:02:16.656359 | orchestrator | 00:02:16.655 STDOUT terraform:  + create 2025-12-05 00:02:16.656373 | orchestrator | 00:02:16.655 STDOUT terraform:  <= read (data resources) 2025-12-05 00:02:16.656385 | orchestrator | 00:02:16.655 STDOUT terraform: OpenTofu will perform the following actions: 2025-12-05 00:02:16.656397 | orchestrator | 00:02:16.655 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-12-05 00:02:16.656423 | orchestrator | 00:02:16.655 STDOUT terraform:  # (config refers to values not yet known) 2025-12-05 00:02:16.656435 | orchestrator | 00:02:16.655 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-12-05 00:02:16.656446 | orchestrator | 00:02:16.655 STDOUT terraform:  + checksum = (known after apply) 2025-12-05 00:02:16.656457 | orchestrator | 00:02:16.655 STDOUT terraform:  + created_at = (known after apply) 2025-12-05 00:02:16.656467 | orchestrator | 00:02:16.655 STDOUT terraform:  + file = (known after apply) 2025-12-05 00:02:16.656478 | orchestrator | 00:02:16.655 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.656489 | orchestrator | 00:02:16.655 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.656500 | orchestrator | 00:02:16.655 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-12-05 00:02:16.656511 | orchestrator | 00:02:16.655 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-12-05 00:02:16.656522 | orchestrator | 00:02:16.655 STDOUT terraform:  + most_recent = true 2025-12-05 00:02:16.656532 | orchestrator | 00:02:16.655 STDOUT terraform:  + name = (known after apply) 2025-12-05 00:02:16.656543 | orchestrator | 00:02:16.655 STDOUT terraform:  + protected = (known after apply) 2025-12-05 00:02:16.656554 | orchestrator | 00:02:16.655 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.656565 | orchestrator | 00:02:16.655 STDOUT terraform:  + schema = (known after apply) 2025-12-05 00:02:16.656575 | orchestrator | 00:02:16.655 STDOUT terraform:  + size_bytes = (known after apply) 2025-12-05 00:02:16.656586 | orchestrator | 00:02:16.655 STDOUT terraform:  + tags = (known after apply) 2025-12-05 00:02:16.656597 | orchestrator | 00:02:16.655 STDOUT terraform:  + updated_at = (known after apply) 2025-12-05 00:02:16.656632 | orchestrator | 00:02:16.655 STDOUT terraform:  } 2025-12-05 00:02:16.656658 | orchestrator | 00:02:16.655 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-12-05 00:02:16.656670 | orchestrator | 00:02:16.656 STDOUT terraform:  # (config refers to values not yet known) 2025-12-05 00:02:16.656681 | orchestrator | 00:02:16.656 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-12-05 00:02:16.656692 | orchestrator | 00:02:16.656 STDOUT terraform:  + checksum = (known after apply) 2025-12-05 00:02:16.656702 | orchestrator | 00:02:16.656 STDOUT terraform:  + created_at = (known after apply) 2025-12-05 00:02:16.656713 | orchestrator | 00:02:16.656 STDOUT terraform:  + file = (known 
after apply) 2025-12-05 00:02:16.656729 | orchestrator | 00:02:16.656 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.656740 | orchestrator | 00:02:16.656 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.656751 | orchestrator | 00:02:16.656 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-12-05 00:02:16.656761 | orchestrator | 00:02:16.656 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-12-05 00:02:16.656772 | orchestrator | 00:02:16.656 STDOUT terraform:  + most_recent = true 2025-12-05 00:02:16.656782 | orchestrator | 00:02:16.656 STDOUT terraform:  + name = (known after apply) 2025-12-05 00:02:16.656793 | orchestrator | 00:02:16.656 STDOUT terraform:  + protected = (known after apply) 2025-12-05 00:02:16.656804 | orchestrator | 00:02:16.656 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.656814 | orchestrator | 00:02:16.656 STDOUT terraform:  + schema = (known after apply) 2025-12-05 00:02:16.656825 | orchestrator | 00:02:16.656 STDOUT terraform:  + size_bytes = (known after apply) 2025-12-05 00:02:16.656835 | orchestrator | 00:02:16.656 STDOUT terraform:  + tags = (known after apply) 2025-12-05 00:02:16.656846 | orchestrator | 00:02:16.656 STDOUT terraform:  + updated_at = (known after apply) 2025-12-05 00:02:16.656857 | orchestrator | 00:02:16.656 STDOUT terraform:  } 2025-12-05 00:02:16.656871 | orchestrator | 00:02:16.656 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-12-05 00:02:16.656882 | orchestrator | 00:02:16.656 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-12-05 00:02:16.656893 | orchestrator | 00:02:16.656 STDOUT terraform:  + content = (known after apply) 2025-12-05 00:02:16.656909 | orchestrator | 00:02:16.656 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-12-05 00:02:16.656921 | orchestrator | 00:02:16.656 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-12-05 00:02:16.656931 | orchestrator | 00:02:16.656 STDOUT terraform:  + content_md5 = (known after apply) 2025-12-05 00:02:16.656946 | orchestrator | 00:02:16.656 STDOUT terraform:  + content_sha1 = (known after apply) 2025-12-05 00:02:16.656956 | orchestrator | 00:02:16.656 STDOUT terraform:  + content_sha256 = (known after apply) 2025-12-05 00:02:16.656970 | orchestrator | 00:02:16.656 STDOUT terraform:  + content_sha512 = (known after apply) 2025-12-05 00:02:16.656984 | orchestrator | 00:02:16.656 STDOUT terraform:  + directory_permission = "0777" 2025-12-05 00:02:16.657008 | orchestrator | 00:02:16.656 STDOUT terraform:  + file_permission = "0644" 2025-12-05 00:02:16.657061 | orchestrator | 00:02:16.656 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-12-05 00:02:16.657078 | orchestrator | 00:02:16.657 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.657089 | orchestrator | 00:02:16.657 STDOUT terraform:  } 2025-12-05 00:02:16.657264 | orchestrator | 00:02:16.657 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-12-05 00:02:16.657308 | orchestrator | 00:02:16.657 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-12-05 00:02:16.657324 | orchestrator | 00:02:16.657 STDOUT terraform:  + content = (known after apply) 2025-12-05 00:02:16.657377 | orchestrator | 00:02:16.657 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-12-05 00:02:16.660524 | orchestrator | 00:02:16.657 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-12-05 00:02:16.660605 | orchestrator | 
00:02:16.660 STDOUT terraform:  + content_md5 = (known after apply) 2025-12-05 00:02:16.660618 | orchestrator | 00:02:16.660 STDOUT terraform:  + content_sha1 = (known after apply) 2025-12-05 00:02:16.660629 | orchestrator | 00:02:16.660 STDOUT terraform:  + content_sha256 = (known after apply) 2025-12-05 00:02:16.660643 | orchestrator | 00:02:16.660 STDOUT terraform:  + content_sha512 = (known after apply) 2025-12-05 00:02:16.660655 | orchestrator | 00:02:16.660 STDOUT terraform:  + directory_permission = "0777" 2025-12-05 00:02:16.660669 | orchestrator | 00:02:16.660 STDOUT terraform:  + file_permission = "0644" 2025-12-05 00:02:16.660816 | orchestrator | 00:02:16.660 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-12-05 00:02:16.660888 | orchestrator | 00:02:16.660 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.660902 | orchestrator | 00:02:16.660 STDOUT terraform:  } 2025-12-05 00:02:16.660918 | orchestrator | 00:02:16.660 STDOUT terraform:  # local_file.inventory will be created 2025-12-05 00:02:16.660930 | orchestrator | 00:02:16.660 STDOUT terraform:  + resource "local_file" "inventory" { 2025-12-05 00:02:16.660941 | orchestrator | 00:02:16.660 STDOUT terraform:  + content = (known after apply) 2025-12-05 00:02:16.660955 | orchestrator | 00:02:16.660 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-12-05 00:02:16.660968 | orchestrator | 00:02:16.660 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-12-05 00:02:16.661023 | orchestrator | 00:02:16.660 STDOUT terraform:  + content_md5 = (known after apply) 2025-12-05 00:02:16.661055 | orchestrator | 00:02:16.661 STDOUT terraform:  + content_sha1 = (known after apply) 2025-12-05 00:02:16.661089 | orchestrator | 00:02:16.661 STDOUT terraform:  + content_sha256 = (known after apply) 2025-12-05 00:02:16.661129 | orchestrator | 00:02:16.661 STDOUT terraform:  + content_sha512 = (known after apply) 2025-12-05 00:02:16.661146 | orchestrator | 00:02:16.661 STDOUT terraform:  + directory_permission = "0777" 2025-12-05 00:02:16.661180 | orchestrator | 00:02:16.661 STDOUT terraform:  + file_permission = "0644" 2025-12-05 00:02:16.661215 | orchestrator | 00:02:16.661 STDOUT terraform:  + filename = "inventory.ci" 2025-12-05 00:02:16.661258 | orchestrator | 00:02:16.661 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.661270 | orchestrator | 00:02:16.661 STDOUT terraform:  } 2025-12-05 00:02:16.661284 | orchestrator | 00:02:16.661 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-12-05 00:02:16.661299 | orchestrator | 00:02:16.661 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-12-05 00:02:16.661341 | orchestrator | 00:02:16.661 STDOUT terraform:  + content = (sensitive value) 2025-12-05 00:02:16.661357 | orchestrator | 00:02:16.661 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-12-05 00:02:16.661414 | orchestrator | 00:02:16.661 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-12-05 00:02:16.661431 | orchestrator | 00:02:16.661 STDOUT terraform:  + content_md5 = (known after apply) 2025-12-05 00:02:16.661471 | orchestrator | 00:02:16.661 STDOUT terraform:  + content_sha1 = (known after apply) 2025-12-05 00:02:16.661500 | orchestrator | 00:02:16.661 STDOUT terraform:  + content_sha256 = (known after apply) 2025-12-05 00:02:16.661554 | orchestrator | 00:02:16.661 STDOUT terraform:  + content_sha512 = (known after apply) 2025-12-05 00:02:16.661570 | orchestrator | 00:02:16.661 STDOUT 
terraform:  + directory_permission = "0700" 2025-12-05 00:02:16.661584 | orchestrator | 00:02:16.661 STDOUT terraform:  + file_permission = "0600" 2025-12-05 00:02:16.661629 | orchestrator | 00:02:16.661 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-12-05 00:02:16.661646 | orchestrator | 00:02:16.661 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.661659 | orchestrator | 00:02:16.661 STDOUT terraform:  } 2025-12-05 00:02:16.661673 | orchestrator | 00:02:16.661 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-12-05 00:02:16.661705 | orchestrator | 00:02:16.661 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-12-05 00:02:16.661720 | orchestrator | 00:02:16.661 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.661733 | orchestrator | 00:02:16.661 STDOUT terraform:  } 2025-12-05 00:02:16.661779 | orchestrator | 00:02:16.661 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-12-05 00:02:16.661845 | orchestrator | 00:02:16.661 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-12-05 00:02:16.661862 | orchestrator | 00:02:16.661 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.661876 | orchestrator | 00:02:16.661 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.661916 | orchestrator | 00:02:16.661 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.661946 | orchestrator | 00:02:16.661 STDOUT terraform:  + image_id = (known after apply) 2025-12-05 00:02:16.661997 | orchestrator | 00:02:16.661 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.662013 | orchestrator | 00:02:16.661 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-12-05 00:02:16.662074 | orchestrator | 00:02:16.662 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.662089 | orchestrator | 00:02:16.662 STDOUT terraform:  + size = 80 2025-12-05 00:02:16.662112 | orchestrator | 00:02:16.662 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.662127 | orchestrator | 00:02:16.662 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.662138 | orchestrator | 00:02:16.662 STDOUT terraform:  } 2025-12-05 00:02:16.662244 | orchestrator | 00:02:16.662 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-12-05 00:02:16.662263 | orchestrator | 00:02:16.662 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-12-05 00:02:16.662277 | orchestrator | 00:02:16.662 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.662310 | orchestrator | 00:02:16.662 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.662325 | orchestrator | 00:02:16.662 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.662364 | orchestrator | 00:02:16.662 STDOUT terraform:  + image_id = (known after apply) 2025-12-05 00:02:16.662396 | orchestrator | 00:02:16.662 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.662452 | orchestrator | 00:02:16.662 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-12-05 00:02:16.662466 | orchestrator | 00:02:16.662 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.662492 | orchestrator | 00:02:16.662 STDOUT terraform:  + size = 80 2025-12-05 00:02:16.662506 | orchestrator | 00:02:16.662 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 
00:02:16.662535 | orchestrator | 00:02:16.662 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.662545 | orchestrator | 00:02:16.662 STDOUT terraform:  } 2025-12-05 00:02:16.662590 | orchestrator | 00:02:16.662 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-12-05 00:02:16.662651 | orchestrator | 00:02:16.662 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-12-05 00:02:16.662667 | orchestrator | 00:02:16.662 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.662679 | orchestrator | 00:02:16.662 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.662727 | orchestrator | 00:02:16.662 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.662765 | orchestrator | 00:02:16.662 STDOUT terraform:  + image_id = (known after apply) 2025-12-05 00:02:16.662800 | orchestrator | 00:02:16.662 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.662837 | orchestrator | 00:02:16.662 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-12-05 00:02:16.662866 | orchestrator | 00:02:16.662 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.662880 | orchestrator | 00:02:16.662 STDOUT terraform:  + size = 80 2025-12-05 00:02:16.662907 | orchestrator | 00:02:16.662 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.662938 | orchestrator | 00:02:16.662 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.662948 | orchestrator | 00:02:16.662 STDOUT terraform:  } 2025-12-05 00:02:16.662987 | orchestrator | 00:02:16.662 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-12-05 00:02:16.663028 | orchestrator | 00:02:16.662 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-12-05 00:02:16.663063 | orchestrator | 00:02:16.663 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.663077 | orchestrator | 00:02:16.663 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.663133 | orchestrator | 00:02:16.663 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.663148 | orchestrator | 00:02:16.663 STDOUT terraform:  + image_id = (known after apply) 2025-12-05 00:02:16.663182 | orchestrator | 00:02:16.663 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.663235 | orchestrator | 00:02:16.663 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-12-05 00:02:16.663267 | orchestrator | 00:02:16.663 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.663280 | orchestrator | 00:02:16.663 STDOUT terraform:  + size = 80 2025-12-05 00:02:16.663305 | orchestrator | 00:02:16.663 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.663319 | orchestrator | 00:02:16.663 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.663331 | orchestrator | 00:02:16.663 STDOUT terraform:  } 2025-12-05 00:02:16.663385 | orchestrator | 00:02:16.663 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-12-05 00:02:16.663422 | orchestrator | 00:02:16.663 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-12-05 00:02:16.663461 | orchestrator | 00:02:16.663 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.663476 | orchestrator | 00:02:16.663 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.663511 | orchestrator 
| 00:02:16.663 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.663547 | orchestrator | 00:02:16.663 STDOUT terraform:  + image_id = (known after apply) 2025-12-05 00:02:16.668781 | orchestrator | 00:02:16.663 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.668831 | orchestrator | 00:02:16.663 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-12-05 00:02:16.668840 | orchestrator | 00:02:16.664 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.668848 | orchestrator | 00:02:16.664 STDOUT terraform:  + size = 80 2025-12-05 00:02:16.668857 | orchestrator | 00:02:16.664 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.668866 | orchestrator | 00:02:16.664 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.668874 | orchestrator | 00:02:16.664 STDOUT terraform:  } 2025-12-05 00:02:16.668883 | orchestrator | 00:02:16.664 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-12-05 00:02:16.668892 | orchestrator | 00:02:16.664 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-12-05 00:02:16.668900 | orchestrator | 00:02:16.664 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.668924 | orchestrator | 00:02:16.668 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.668932 | orchestrator | 00:02:16.668 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.668940 | orchestrator | 00:02:16.668 STDOUT terraform:  + image_id = (known after apply) 2025-12-05 00:02:16.668947 | orchestrator | 00:02:16.668 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.668955 | orchestrator | 00:02:16.668 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-12-05 00:02:16.668963 | orchestrator | 00:02:16.668 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.668970 | orchestrator | 00:02:16.668 STDOUT terraform:  + size = 80 2025-12-05 00:02:16.668978 | orchestrator | 00:02:16.668 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.668986 | orchestrator | 00:02:16.668 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.668993 | orchestrator | 00:02:16.668 STDOUT terraform:  } 2025-12-05 00:02:16.669001 | orchestrator | 00:02:16.668 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-12-05 00:02:16.669019 | orchestrator | 00:02:16.668 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-12-05 00:02:16.669027 | orchestrator | 00:02:16.668 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.669035 | orchestrator | 00:02:16.668 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.669049 | orchestrator | 00:02:16.668 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.669057 | orchestrator | 00:02:16.668 STDOUT terraform:  + image_id = (known after apply) 2025-12-05 00:02:16.669065 | orchestrator | 00:02:16.668 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.669073 | orchestrator | 00:02:16.668 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-12-05 00:02:16.669083 | orchestrator | 00:02:16.669 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.669094 | orchestrator | 00:02:16.669 STDOUT terraform:  + size = 80 2025-12-05 00:02:16.669135 | orchestrator | 00:02:16.669 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 
00:02:16.669148 | orchestrator | 00:02:16.669 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.669211 | orchestrator | 00:02:16.669 STDOUT terraform:  } 2025-12-05 00:02:16.678325 | orchestrator | 00:02:16.678 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-12-05 00:02:16.678457 | orchestrator | 00:02:16.678 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-12-05 00:02:16.678526 | orchestrator | 00:02:16.678 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.678578 | orchestrator | 00:02:16.678 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.678642 | orchestrator | 00:02:16.678 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.678707 | orchestrator | 00:02:16.678 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.678773 | orchestrator | 00:02:16.678 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-12-05 00:02:16.678849 | orchestrator | 00:02:16.678 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.678891 | orchestrator | 00:02:16.678 STDOUT terraform:  + size = 20 2025-12-05 00:02:16.678937 | orchestrator | 00:02:16.678 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.679023 | orchestrator | 00:02:16.678 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.679056 | orchestrator | 00:02:16.679 STDOUT terraform:  } 2025-12-05 00:02:16.679142 | orchestrator | 00:02:16.679 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-12-05 00:02:16.679264 | orchestrator | 00:02:16.679 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-12-05 00:02:16.679332 | orchestrator | 00:02:16.679 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.679379 | orchestrator | 00:02:16.679 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.679450 | orchestrator | 00:02:16.679 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.679515 | orchestrator | 00:02:16.679 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.679581 | orchestrator | 00:02:16.679 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-12-05 00:02:16.679661 | orchestrator | 00:02:16.679 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.679707 | orchestrator | 00:02:16.679 STDOUT terraform:  + size = 20 2025-12-05 00:02:16.679753 | orchestrator | 00:02:16.679 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.679800 | orchestrator | 00:02:16.679 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.679833 | orchestrator | 00:02:16.679 STDOUT terraform:  } 2025-12-05 00:02:16.679909 | orchestrator | 00:02:16.679 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-12-05 00:02:16.679982 | orchestrator | 00:02:16.679 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-12-05 00:02:16.680046 | orchestrator | 00:02:16.679 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.680093 | orchestrator | 00:02:16.680 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.680193 | orchestrator | 00:02:16.680 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.680265 | orchestrator | 00:02:16.680 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.680334 | orchestrator | 00:02:16.680 STDOUT terraform:  + name 
= "testbed-volume-2-node-5" 2025-12-05 00:02:16.680398 | orchestrator | 00:02:16.680 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.680441 | orchestrator | 00:02:16.680 STDOUT terraform:  + size = 20 2025-12-05 00:02:16.680487 | orchestrator | 00:02:16.680 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.680537 | orchestrator | 00:02:16.680 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.680570 | orchestrator | 00:02:16.680 STDOUT terraform:  } 2025-12-05 00:02:16.680650 | orchestrator | 00:02:16.680 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-12-05 00:02:16.680725 | orchestrator | 00:02:16.680 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-12-05 00:02:16.680783 | orchestrator | 00:02:16.680 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.680828 | orchestrator | 00:02:16.680 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.680885 | orchestrator | 00:02:16.680 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.680941 | orchestrator | 00:02:16.680 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.681002 | orchestrator | 00:02:16.680 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-12-05 00:02:16.681059 | orchestrator | 00:02:16.681 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.681098 | orchestrator | 00:02:16.681 STDOUT terraform:  + size = 20 2025-12-05 00:02:16.681140 | orchestrator | 00:02:16.681 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.681196 | orchestrator | 00:02:16.681 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.681225 | orchestrator | 00:02:16.681 STDOUT terraform:  } 2025-12-05 00:02:16.681298 | orchestrator | 00:02:16.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-12-05 00:02:16.681364 | orchestrator | 00:02:16.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-12-05 00:02:16.681419 | orchestrator | 00:02:16.681 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.681460 | orchestrator | 00:02:16.681 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.681516 | orchestrator | 00:02:16.681 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.681570 | orchestrator | 00:02:16.681 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.681629 | orchestrator | 00:02:16.681 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-12-05 00:02:16.681687 | orchestrator | 00:02:16.681 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.681723 | orchestrator | 00:02:16.681 STDOUT terraform:  + size = 20 2025-12-05 00:02:16.681766 | orchestrator | 00:02:16.681 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.681807 | orchestrator | 00:02:16.681 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.681835 | orchestrator | 00:02:16.681 STDOUT terraform:  } 2025-12-05 00:02:16.681902 | orchestrator | 00:02:16.681 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-12-05 00:02:16.681966 | orchestrator | 00:02:16.681 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-12-05 00:02:16.682043 | orchestrator | 00:02:16.681 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.682087 | orchestrator | 00:02:16.682 
STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.682144 | orchestrator | 00:02:16.682 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.682217 | orchestrator | 00:02:16.682 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.682283 | orchestrator | 00:02:16.682 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-12-05 00:02:16.682340 | orchestrator | 00:02:16.682 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.682379 | orchestrator | 00:02:16.682 STDOUT terraform:  + size = 20 2025-12-05 00:02:16.682419 | orchestrator | 00:02:16.682 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.682461 | orchestrator | 00:02:16.682 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.682488 | orchestrator | 00:02:16.682 STDOUT terraform:  } 2025-12-05 00:02:16.682553 | orchestrator | 00:02:16.682 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-12-05 00:02:16.682619 | orchestrator | 00:02:16.682 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-12-05 00:02:16.682674 | orchestrator | 00:02:16.682 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.682718 | orchestrator | 00:02:16.682 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.682775 | orchestrator | 00:02:16.682 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.682828 | orchestrator | 00:02:16.682 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.682885 | orchestrator | 00:02:16.682 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-12-05 00:02:16.682940 | orchestrator | 00:02:16.682 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.682976 | orchestrator | 00:02:16.682 STDOUT terraform:  + size = 20 2025-12-05 00:02:16.683017 | orchestrator | 00:02:16.682 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.683058 | orchestrator | 00:02:16.683 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.683085 | orchestrator | 00:02:16.683 STDOUT terraform:  } 2025-12-05 00:02:16.683165 | orchestrator | 00:02:16.683 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-12-05 00:02:16.683232 | orchestrator | 00:02:16.683 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-12-05 00:02:16.683288 | orchestrator | 00:02:16.683 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.683327 | orchestrator | 00:02:16.683 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.683383 | orchestrator | 00:02:16.683 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.683437 | orchestrator | 00:02:16.683 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.683493 | orchestrator | 00:02:16.683 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-12-05 00:02:16.683547 | orchestrator | 00:02:16.683 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.683583 | orchestrator | 00:02:16.683 STDOUT terraform:  + size = 20 2025-12-05 00:02:16.683626 | orchestrator | 00:02:16.683 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.683666 | orchestrator | 00:02:16.683 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.683693 | orchestrator | 00:02:16.683 STDOUT terraform:  } 2025-12-05 00:02:16.683769 | orchestrator | 00:02:16.683 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-12-05 00:02:16.683833 | orchestrator | 00:02:16.683 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-12-05 00:02:16.683886 | orchestrator | 00:02:16.683 STDOUT terraform:  + attachment = (known after apply) 2025-12-05 00:02:16.683927 | orchestrator | 00:02:16.683 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.683981 | orchestrator | 00:02:16.683 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.684035 | orchestrator | 00:02:16.683 STDOUT terraform:  + metadata = (known after apply) 2025-12-05 00:02:16.684093 | orchestrator | 00:02:16.684 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-12-05 00:02:16.684146 | orchestrator | 00:02:16.684 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.684239 | orchestrator | 00:02:16.684 STDOUT terraform:  + size = 20 2025-12-05 00:02:16.684290 | orchestrator | 00:02:16.684 STDOUT terraform:  + volume_retype_policy = "never" 2025-12-05 00:02:16.684329 | orchestrator | 00:02:16.684 STDOUT terraform:  + volume_type = "ssd" 2025-12-05 00:02:16.684356 | orchestrator | 00:02:16.684 STDOUT terraform:  } 2025-12-05 00:02:16.684442 | orchestrator | 00:02:16.684 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-12-05 00:02:16.684513 | orchestrator | 00:02:16.684 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-12-05 00:02:16.684566 | orchestrator | 00:02:16.684 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-12-05 00:02:16.684619 | orchestrator | 00:02:16.684 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-12-05 00:02:16.684674 | orchestrator | 00:02:16.684 STDOUT terraform:  + all_metadata = (known after apply) 2025-12-05 00:02:16.684727 | orchestrator | 00:02:16.684 STDOUT terraform:  + all_tags = (known after apply) 2025-12-05 00:02:16.684763 | orchestrator | 00:02:16.684 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.684798 | orchestrator | 00:02:16.684 STDOUT terraform:  + config_drive = true 2025-12-05 00:02:16.684848 | orchestrator | 00:02:16.684 STDOUT terraform:  + created = (known after apply) 2025-12-05 00:02:16.684900 | orchestrator | 00:02:16.684 STDOUT terraform:  + flavor_id = (known after apply) 2025-12-05 00:02:16.684944 | orchestrator | 00:02:16.684 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-12-05 00:02:16.684980 | orchestrator | 00:02:16.684 STDOUT terraform:  + force_delete = false 2025-12-05 00:02:16.685029 | orchestrator | 00:02:16.684 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-12-05 00:02:16.685082 | orchestrator | 00:02:16.685 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.685132 | orchestrator | 00:02:16.685 STDOUT terraform:  + image_id = (known after apply) 2025-12-05 00:02:16.685197 | orchestrator | 00:02:16.685 STDOUT terraform:  + image_name = (known after apply) 2025-12-05 00:02:16.685242 | orchestrator | 00:02:16.685 STDOUT terraform:  + key_pair = "testbed" 2025-12-05 00:02:16.685297 | orchestrator | 00:02:16.685 STDOUT terraform:  + name = "testbed-manager" 2025-12-05 00:02:16.685335 | orchestrator | 00:02:16.685 STDOUT terraform:  + power_state = "active" 2025-12-05 00:02:16.685385 | orchestrator | 00:02:16.685 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.685436 | orchestrator | 00:02:16.685 STDOUT terraform:  + security_groups = (known after apply) 2025-12-05 
00:02:16.685474 | orchestrator | 00:02:16.685 STDOUT terraform:  + stop_before_destroy = false 2025-12-05 00:02:16.685524 | orchestrator | 00:02:16.685 STDOUT terraform:  + updated = (known after apply) 2025-12-05 00:02:16.685568 | orchestrator | 00:02:16.685 STDOUT terraform:  + user_data = (sensitive value) 2025-12-05 00:02:16.685597 | orchestrator | 00:02:16.685 STDOUT terraform:  + block_device { 2025-12-05 00:02:16.685636 | orchestrator | 00:02:16.685 STDOUT terraform:  + boot_index = 0 2025-12-05 00:02:16.685680 | orchestrator | 00:02:16.685 STDOUT terraform:  + delete_on_termination = false 2025-12-05 00:02:16.685724 | orchestrator | 00:02:16.685 STDOUT terraform:  + destination_type = "volume" 2025-12-05 00:02:16.685766 | orchestrator | 00:02:16.685 STDOUT terraform:  + multiattach = false 2025-12-05 00:02:16.685810 | orchestrator | 00:02:16.685 STDOUT terraform:  + source_type = "volume" 2025-12-05 00:02:16.685864 | orchestrator | 00:02:16.685 STDOUT terraform:  + uuid = (known after apply) 2025-12-05 00:02:16.685890 | orchestrator | 00:02:16.685 STDOUT terraform:  } 2025-12-05 00:02:16.685916 | orchestrator | 00:02:16.685 STDOUT terraform:  + network { 2025-12-05 00:02:16.685949 | orchestrator | 00:02:16.685 STDOUT terraform:  + access_network = false 2025-12-05 00:02:16.685994 | orchestrator | 00:02:16.685 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-12-05 00:02:16.686060 | orchestrator | 00:02:16.686 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-12-05 00:02:16.686108 | orchestrator | 00:02:16.686 STDOUT terraform:  + mac = (known after apply) 2025-12-05 00:02:16.686185 | orchestrator | 00:02:16.686 STDOUT terraform:  + name = (known after apply) 2025-12-05 00:02:16.686234 | orchestrator | 00:02:16.686 STDOUT terraform:  + port = (known after apply) 2025-12-05 00:02:16.686279 | orchestrator | 00:02:16.686 STDOUT terraform:  + uuid = (known after apply) 2025-12-05 00:02:16.686307 | orchestrator | 00:02:16.686 STDOUT terraform:  } 2025-12-05 00:02:16.686331 | orchestrator | 00:02:16.686 STDOUT terraform:  } 2025-12-05 00:02:16.686388 | orchestrator | 00:02:16.686 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-12-05 00:02:16.686441 | orchestrator | 00:02:16.686 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-12-05 00:02:16.686486 | orchestrator | 00:02:16.686 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-12-05 00:02:16.686532 | orchestrator | 00:02:16.686 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-12-05 00:02:16.686578 | orchestrator | 00:02:16.686 STDOUT terraform:  + all_metadata = (known after apply) 2025-12-05 00:02:16.686630 | orchestrator | 00:02:16.686 STDOUT terraform:  + all_tags = (known after apply) 2025-12-05 00:02:16.686665 | orchestrator | 00:02:16.686 STDOUT terraform:  + availability_zone = "nova" 2025-12-05 00:02:16.686695 | orchestrator | 00:02:16.686 STDOUT terraform:  + config_drive = true 2025-12-05 00:02:16.686740 | orchestrator | 00:02:16.686 STDOUT terraform:  + created = (known after apply) 2025-12-05 00:02:16.686787 | orchestrator | 00:02:16.686 STDOUT terraform:  + flavor_id = (known after apply) 2025-12-05 00:02:16.686826 | orchestrator | 00:02:16.686 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-12-05 00:02:16.686859 | orchestrator | 00:02:16.686 STDOUT terraform:  + force_delete = false 2025-12-05 00:02:16.686903 | orchestrator | 00:02:16.686 STDOUT terraform:  + hypervisor_hostname = (known after apply) 
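The volume entries planned above (testbed-volume-*-node-base at 80 GB and testbed-volume-*-node-* at 20 GB) follow a count-based pattern. A minimal sketch of such declarations, with the counts and the index arithmetic inferred from the planned names; sizes, volume type, and availability zone are taken from the plan, and the image reference is an assumption:

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = 6                     # indices 0-5 appear in the plan above
  name              = "testbed-volume-${count.index}-node-base"
  size              = 80
  volume_type       = "ssd"
  availability_zone = "nova"
  # image_id is (known after apply) in the plan; presumably it comes from the
  # image_node data source resolved earlier, e.g. data.openstack_images_image_v2.image_node.id
}

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 9                     # indices 0-8 appear in the plan above
  name              = "testbed-volume-${count.index}-node-${count.index % 3 + 3}"
  size              = 20
  volume_type       = "ssd"
  availability_zone = "nova"
}

The manager base volume (testbed-volume-manager-base, also 80 GB) is declared analogously.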
2025-12-05 00:02:16.686949 | orchestrator | 00:02:16.686 STDOUT terraform:
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device { ... identical to node_server[0] ... }
      + network { ... identical to node_server[0] ... }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  # openstack_compute_instance_v2.node_server[3] will be created
  # openstack_compute_instance_v2.node_server[4] will be created
  # openstack_compute_instance_v2.node_server[5] will be created
  + (each identical to node_server[1], except name = "testbed-node-2" through "testbed-node-5")
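The plan requests six identical boot-from-volume instances that differ only in their name and, later, in the management port they attach to. A minimal HCL sketch that would produce plan entries of this shape, assuming the boot volumes, ports and user-data file are defined elsewhere in the testbed configuration (the names node_base_volume, node_port_management indexing and user_data.yml are illustrative assumptions, not taken from this log):

    resource "openstack_compute_instance_v2" "node_server" {
      count             = 6
      name              = "testbed-node-${count.index}"
      availability_zone = "nova"
      flavor_name       = "OSISM-8V-32"
      key_pair          = "testbed"
      config_drive      = true
      power_state       = "active"
      user_data         = file("user_data.yml")  # assumed source of the hashed user_data shown in the plan

      # Boot from a pre-created volume; matches source_type/destination_type = "volume", boot_index = 0.
      block_device {
        uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id  # assumed volume resource
        source_type           = "volume"
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false
      }

      # Attach a pre-created management port instead of letting Nova allocate one.
      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id  # assumed wiring
      }
    }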
2025-12-05 00:02:16.712601 | orchestrator | 00:02:16.712 STDOUT terraform:
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [8] will be created
  + (eight further blocks identical to node_volume_attachment[0])

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }
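Both private_key and public_key of the keypair are computed values in the plan, which suggests the provider generates the key pair itself, and the nine volume attachments only reference IDs that are known after apply. A sketch under those assumptions; the count-to-node mapping and the node_extra_volume resource are hypothetical, since the volume layout is not visible in this excerpt:

    # No public_key is supplied, so the provider generates the pair
    # (hence "private_key = (sensitive value)" in the plan).
    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"
    }

    # instance_id and volume_id are both "known after apply", i.e. they point at
    # resources created in the same plan.
    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id   # hypothetical mapping
      volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id  # hypothetical volume resource
    }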
2025-12-05 00:02:16.718591 | orchestrator | 00:02:16.718 STDOUT terraform:
  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
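The management network, the manager's port and its floating IP carry fixed values in the plan (pool "public", fixed IP 192.168.16.5, allowed address pair 192.168.16.8/32). A sketch that matches those values, assuming a subnet resource named subnet_management that is not shown in this excerpt:

    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_port_v2" "manager_port_management" {
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed subnet resource
        ip_address = "192.168.16.5"
      }

      # 192.168.16.8 appears as an allowed address pair on every port in this plan
      # (likely a shared or virtual IP within the testbed).
      allowed_address_pairs {
        ip_address = "192.168.16.8/32"
      }
    }

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }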
2025-12-05 00:02:16.720989 | orchestrator | 00:02:16.720 STDOUT terraform:
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  # openstack_networking_port_v2.node_port_management[2] will be created
  # openstack_networking_port_v2.node_port_management[3] will be created
  # openstack_networking_port_v2.node_port_management[4] will be created
  # openstack_networking_port_v2.node_port_management[5] will be created
  + (each identical to node_port_management[0], except fixed_ip.ip_address = "192.168.16.11" through "192.168.16.15")
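Each node port repeats the same three allowed address pairs and takes consecutive fixed IPs (192.168.16.10 to 192.168.16.15), which maps naturally onto a counted resource; the subnet reference is again an assumption:

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumed subnet resource
        ip_address = "192.168.16.${10 + count.index}"                     # 192.168.16.10 ... 192.168.16.15
      }

      # The same three pairs appear on every node port in the plan.
      allowed_address_pairs {
        ip_address = "192.168.16.254/32"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/32"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/32"
      }
    }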
2025-12-05 00:02:16.734478 | orchestrator | 00:02:16.728 STDOUT terraform:
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }
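The router is created against a fixed external network ID and gets one interface on the management subnet; a sketch, with the subnet reference assumed as before:

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id  # assumed subnet resource
    }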
00:02:16.735 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-12-05 00:02:16.735298 | orchestrator | 00:02:16.735 STDOUT terraform:  + remote_group_id = (known after apply) 2025-12-05 00:02:16.735345 | orchestrator | 00:02:16.735 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-12-05 00:02:16.735383 | orchestrator | 00:02:16.735 STDOUT terraform:  + security_group_id = (known after apply) 2025-12-05 00:02:16.735425 | orchestrator | 00:02:16.735 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.735452 | orchestrator | 00:02:16.735 STDOUT terraform:  } 2025-12-05 00:02:16.735510 | orchestrator | 00:02:16.735 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-12-05 00:02:16.735587 | orchestrator | 00:02:16.735 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-12-05 00:02:16.735621 | orchestrator | 00:02:16.735 STDOUT terraform:  + description = "wireguard" 2025-12-05 00:02:16.735652 | orchestrator | 00:02:16.735 STDOUT terraform:  + direction = "ingress" 2025-12-05 00:02:16.735679 | orchestrator | 00:02:16.735 STDOUT terraform:  + ethertype = "IPv4" 2025-12-05 00:02:16.735719 | orchestrator | 00:02:16.735 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.735745 | orchestrator | 00:02:16.735 STDOUT terraform:  + port_range_max = 51820 2025-12-05 00:02:16.735771 | orchestrator | 00:02:16.735 STDOUT terraform:  + port_range_min = 51820 2025-12-05 00:02:16.735812 | orchestrator | 00:02:16.735 STDOUT terraform:  + protocol = "udp" 2025-12-05 00:02:16.735850 | orchestrator | 00:02:16.735 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.735888 | orchestrator | 00:02:16.735 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-12-05 00:02:16.735926 | orchestrator | 00:02:16.735 STDOUT terraform:  + remote_group_id = (known after apply) 2025-12-05 00:02:16.735958 | orchestrator | 00:02:16.735 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-12-05 00:02:16.735996 | orchestrator | 00:02:16.735 STDOUT terraform:  + security_group_id = (known after apply) 2025-12-05 00:02:16.736034 | orchestrator | 00:02:16.735 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.736049 | orchestrator | 00:02:16.736 STDOUT terraform:  } 2025-12-05 00:02:16.736107 | orchestrator | 00:02:16.736 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-12-05 00:02:16.736216 | orchestrator | 00:02:16.736 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-12-05 00:02:16.736247 | orchestrator | 00:02:16.736 STDOUT terraform:  + direction = "ingress" 2025-12-05 00:02:16.736277 | orchestrator | 00:02:16.736 STDOUT terraform:  + ethertype = "IPv4" 2025-12-05 00:02:16.736324 | orchestrator | 00:02:16.736 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.736344 | orchestrator | 00:02:16.736 STDOUT terraform:  + protocol = "tcp" 2025-12-05 00:02:16.736381 | orchestrator | 00:02:16.736 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.736415 | orchestrator | 00:02:16.736 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-12-05 00:02:16.736450 | orchestrator | 00:02:16.736 STDOUT terraform:  + remote_group_id = (known after apply) 2025-12-05 00:02:16.736487 | orchestrator | 00:02:16.736 STDOUT terraform:  + 
remote_ip_prefix = "192.168.16.0/20" 2025-12-05 00:02:16.736522 | orchestrator | 00:02:16.736 STDOUT terraform:  + security_group_id = (known after apply) 2025-12-05 00:02:16.736559 | orchestrator | 00:02:16.736 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.736565 | orchestrator | 00:02:16.736 STDOUT terraform:  } 2025-12-05 00:02:16.736621 | orchestrator | 00:02:16.736 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-12-05 00:02:16.736690 | orchestrator | 00:02:16.736 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-12-05 00:02:16.736719 | orchestrator | 00:02:16.736 STDOUT terraform:  + direction = "ingress" 2025-12-05 00:02:16.736743 | orchestrator | 00:02:16.736 STDOUT terraform:  + ethertype = "IPv4" 2025-12-05 00:02:16.736780 | orchestrator | 00:02:16.736 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.736804 | orchestrator | 00:02:16.736 STDOUT terraform:  + protocol = "udp" 2025-12-05 00:02:16.736840 | orchestrator | 00:02:16.736 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.736876 | orchestrator | 00:02:16.736 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-12-05 00:02:16.736957 | orchestrator | 00:02:16.736 STDOUT terraform:  + remote_group_id = (known after apply) 2025-12-05 00:02:16.736993 | orchestrator | 00:02:16.736 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-12-05 00:02:16.737029 | orchestrator | 00:02:16.736 STDOUT terraform:  + security_group_id = (known after apply) 2025-12-05 00:02:16.737068 | orchestrator | 00:02:16.737 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.737074 | orchestrator | 00:02:16.737 STDOUT terraform:  } 2025-12-05 00:02:16.737126 | orchestrator | 00:02:16.737 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-12-05 00:02:16.737193 | orchestrator | 00:02:16.737 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-12-05 00:02:16.737222 | orchestrator | 00:02:16.737 STDOUT terraform:  + direction = "ingress" 2025-12-05 00:02:16.737248 | orchestrator | 00:02:16.737 STDOUT terraform:  + ethertype = "IPv4" 2025-12-05 00:02:16.737284 | orchestrator | 00:02:16.737 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.737309 | orchestrator | 00:02:16.737 STDOUT terraform:  + protocol = "icmp" 2025-12-05 00:02:16.737345 | orchestrator | 00:02:16.737 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.737380 | orchestrator | 00:02:16.737 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-12-05 00:02:16.737419 | orchestrator | 00:02:16.737 STDOUT terraform:  + remote_group_id = (known after apply) 2025-12-05 00:02:16.737448 | orchestrator | 00:02:16.737 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-12-05 00:02:16.737483 | orchestrator | 00:02:16.737 STDOUT terraform:  + security_group_id = (known after apply) 2025-12-05 00:02:16.737517 | orchestrator | 00:02:16.737 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.737533 | orchestrator | 00:02:16.737 STDOUT terraform:  } 2025-12-05 00:02:16.737583 | orchestrator | 00:02:16.737 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-12-05 00:02:16.737632 | orchestrator | 00:02:16.737 STDOUT terraform:  + 
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-12-05 00:02:16.737660 | orchestrator | 00:02:16.737 STDOUT terraform:  + direction = "ingress" 2025-12-05 00:02:16.737686 | orchestrator | 00:02:16.737 STDOUT terraform:  + ethertype = "IPv4" 2025-12-05 00:02:16.737723 | orchestrator | 00:02:16.737 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.737747 | orchestrator | 00:02:16.737 STDOUT terraform:  + protocol = "tcp" 2025-12-05 00:02:16.737783 | orchestrator | 00:02:16.737 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.737817 | orchestrator | 00:02:16.737 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-12-05 00:02:16.737853 | orchestrator | 00:02:16.737 STDOUT terraform:  + remote_group_id = (known after apply) 2025-12-05 00:02:16.737882 | orchestrator | 00:02:16.737 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-12-05 00:02:16.737916 | orchestrator | 00:02:16.737 STDOUT terraform:  + security_group_id = (known after apply) 2025-12-05 00:02:16.737953 | orchestrator | 00:02:16.737 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.737967 | orchestrator | 00:02:16.737 STDOUT terraform:  } 2025-12-05 00:02:16.738052 | orchestrator | 00:02:16.737 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-12-05 00:02:16.738081 | orchestrator | 00:02:16.738 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-12-05 00:02:16.738111 | orchestrator | 00:02:16.738 STDOUT terraform:  + direction = "ingress" 2025-12-05 00:02:16.738136 | orchestrator | 00:02:16.738 STDOUT terraform:  + ethertype = "IPv4" 2025-12-05 00:02:16.738186 | orchestrator | 00:02:16.738 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.738210 | orchestrator | 00:02:16.738 STDOUT terraform:  + protocol = "udp" 2025-12-05 00:02:16.738265 | orchestrator | 00:02:16.738 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.738304 | orchestrator | 00:02:16.738 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-12-05 00:02:16.738345 | orchestrator | 00:02:16.738 STDOUT terraform:  + remote_group_id = (known after apply) 2025-12-05 00:02:16.738364 | orchestrator | 00:02:16.738 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-12-05 00:02:16.738402 | orchestrator | 00:02:16.738 STDOUT terraform:  + security_group_id = (known after apply) 2025-12-05 00:02:16.738436 | orchestrator | 00:02:16.738 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.738442 | orchestrator | 00:02:16.738 STDOUT terraform:  } 2025-12-05 00:02:16.738622 | orchestrator | 00:02:16.738 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-12-05 00:02:16.738713 | orchestrator | 00:02:16.738 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-12-05 00:02:16.738742 | orchestrator | 00:02:16.738 STDOUT terraform:  + direction = "ingress" 2025-12-05 00:02:16.738756 | orchestrator | 00:02:16.738 STDOUT terraform:  + ethertype = "IPv4" 2025-12-05 00:02:16.738768 | orchestrator | 00:02:16.738 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.738779 | orchestrator | 00:02:16.738 STDOUT terraform:  + protocol = "icmp" 2025-12-05 00:02:16.738789 | orchestrator | 00:02:16.738 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.738800 | 
orchestrator | 00:02:16.738 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-12-05 00:02:16.738815 | orchestrator | 00:02:16.738 STDOUT terraform:  + remote_group_id = (known after apply) 2025-12-05 00:02:16.738826 | orchestrator | 00:02:16.738 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-12-05 00:02:16.738840 | orchestrator | 00:02:16.738 STDOUT terraform:  + security_group_id = (known after apply) 2025-12-05 00:02:16.738878 | orchestrator | 00:02:16.738 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.738890 | orchestrator | 00:02:16.738 STDOUT terraform:  } 2025-12-05 00:02:16.738931 | orchestrator | 00:02:16.738 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-12-05 00:02:16.738982 | orchestrator | 00:02:16.738 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-12-05 00:02:16.738999 | orchestrator | 00:02:16.738 STDOUT terraform:  + description = "vrrp" 2025-12-05 00:02:16.739036 | orchestrator | 00:02:16.738 STDOUT terraform:  + direction = "ingress" 2025-12-05 00:02:16.739052 | orchestrator | 00:02:16.739 STDOUT terraform:  + ethertype = "IPv4" 2025-12-05 00:02:16.739087 | orchestrator | 00:02:16.739 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.739103 | orchestrator | 00:02:16.739 STDOUT terraform:  + protocol = "112" 2025-12-05 00:02:16.739145 | orchestrator | 00:02:16.739 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.739196 | orchestrator | 00:02:16.739 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-12-05 00:02:16.739232 | orchestrator | 00:02:16.739 STDOUT terraform:  + remote_group_id = (known after apply) 2025-12-05 00:02:16.739266 | orchestrator | 00:02:16.739 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-12-05 00:02:16.739299 | orchestrator | 00:02:16.739 STDOUT terraform:  + security_group_id = (known after apply) 2025-12-05 00:02:16.739336 | orchestrator | 00:02:16.739 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.739351 | orchestrator | 00:02:16.739 STDOUT terraform:  } 2025-12-05 00:02:16.739394 | orchestrator | 00:02:16.739 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-12-05 00:02:16.739447 | orchestrator | 00:02:16.739 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-12-05 00:02:16.739481 | orchestrator | 00:02:16.739 STDOUT terraform:  + all_tags = (known after apply) 2025-12-05 00:02:16.739527 | orchestrator | 00:02:16.739 STDOUT terraform:  + description = "management security group" 2025-12-05 00:02:16.739543 | orchestrator | 00:02:16.739 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.739579 | orchestrator | 00:02:16.739 STDOUT terraform:  + name = "testbed-management" 2025-12-05 00:02:16.739595 | orchestrator | 00:02:16.739 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.739645 | orchestrator | 00:02:16.739 STDOUT terraform:  + stateful = (known after apply) 2025-12-05 00:02:16.739661 | orchestrator | 00:02:16.739 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.739673 | orchestrator | 00:02:16.739 STDOUT terraform:  } 2025-12-05 00:02:16.739708 | orchestrator | 00:02:16.739 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-12-05 00:02:16.739755 | orchestrator | 00:02:16.739 STDOUT 
terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-12-05 00:02:16.739770 | orchestrator | 00:02:16.739 STDOUT terraform:  + all_tags = (known after apply) 2025-12-05 00:02:16.739800 | orchestrator | 00:02:16.739 STDOUT terraform:  + description = "node security group" 2025-12-05 00:02:16.739841 | orchestrator | 00:02:16.739 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.739853 | orchestrator | 00:02:16.739 STDOUT terraform:  + name = "testbed-node" 2025-12-05 00:02:16.739867 | orchestrator | 00:02:16.739 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.739897 | orchestrator | 00:02:16.739 STDOUT terraform:  + stateful = (known after apply) 2025-12-05 00:02:16.739913 | orchestrator | 00:02:16.739 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.739927 | orchestrator | 00:02:16.739 STDOUT terraform:  } 2025-12-05 00:02:16.739973 | orchestrator | 00:02:16.739 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-12-05 00:02:16.740020 | orchestrator | 00:02:16.739 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-12-05 00:02:16.740054 | orchestrator | 00:02:16.740 STDOUT terraform:  + all_tags = (known after apply) 2025-12-05 00:02:16.740069 | orchestrator | 00:02:16.740 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-12-05 00:02:16.740084 | orchestrator | 00:02:16.740 STDOUT terraform:  + dns_nameservers = [ 2025-12-05 00:02:16.740118 | orchestrator | 00:02:16.740 STDOUT terraform:  + "8.8.8.8", 2025-12-05 00:02:16.740130 | orchestrator | 00:02:16.740 STDOUT terraform:  + "9.9.9.9", 2025-12-05 00:02:16.740144 | orchestrator | 00:02:16.740 STDOUT terraform:  ] 2025-12-05 00:02:16.740213 | orchestrator | 00:02:16.740 STDOUT terraform:  + enable_dhcp = true 2025-12-05 00:02:16.740230 | orchestrator | 00:02:16.740 STDOUT terraform:  + gateway_ip = (known after apply) 2025-12-05 00:02:16.740242 | orchestrator | 00:02:16.740 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.740256 | orchestrator | 00:02:16.740 STDOUT terraform:  + ip_version = 4 2025-12-05 00:02:16.740270 | orchestrator | 00:02:16.740 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-12-05 00:02:16.740306 | orchestrator | 00:02:16.740 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-12-05 00:02:16.740374 | orchestrator | 00:02:16.740 STDOUT terraform:  + name = "subnet-testbed-management" 2025-12-05 00:02:16.740391 | orchestrator | 00:02:16.740 STDOUT terraform:  + network_id = (known after apply) 2025-12-05 00:02:16.740418 | orchestrator | 00:02:16.740 STDOUT terraform:  + no_gateway = false 2025-12-05 00:02:16.740448 | orchestrator | 00:02:16.740 STDOUT terraform:  + region = (known after apply) 2025-12-05 00:02:16.740523 | orchestrator | 00:02:16.740 STDOUT terraform:  + service_types = (known after apply) 2025-12-05 00:02:16.740542 | orchestrator | 00:02:16.740 STDOUT terraform:  + tenant_id = (known after apply) 2025-12-05 00:02:16.740553 | orchestrator | 00:02:16.740 STDOUT terraform:  + allocation_pool { 2025-12-05 00:02:16.740569 | orchestrator | 00:02:16.740 STDOUT terraform:  + end = "192.168.31.250" 2025-12-05 00:02:16.740584 | orchestrator | 00:02:16.740 STDOUT terraform:  + start = "192.168.31.200" 2025-12-05 00:02:16.740595 | orchestrator | 00:02:16.740 STDOUT terraform:  } 2025-12-05 00:02:16.740606 | orchestrator | 00:02:16.740 STDOUT terraform:  } 2025-12-05 00:02:16.740621 | orchestrator | 
00:02:16.740 STDOUT terraform:  # terraform_data.image will be created 2025-12-05 00:02:16.740632 | orchestrator | 00:02:16.740 STDOUT terraform:  + resource "terraform_data" "image" { 2025-12-05 00:02:16.740646 | orchestrator | 00:02:16.740 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.740660 | orchestrator | 00:02:16.740 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-12-05 00:02:16.740673 | orchestrator | 00:02:16.740 STDOUT terraform:  + output = (known after apply) 2025-12-05 00:02:16.740687 | orchestrator | 00:02:16.740 STDOUT terraform:  } 2025-12-05 00:02:16.740717 | orchestrator | 00:02:16.740 STDOUT terraform:  # terraform_data.image_node will be created 2025-12-05 00:02:16.740732 | orchestrator | 00:02:16.740 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-12-05 00:02:16.740763 | orchestrator | 00:02:16.740 STDOUT terraform:  + id = (known after apply) 2025-12-05 00:02:16.740779 | orchestrator | 00:02:16.740 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-12-05 00:02:16.740793 | orchestrator | 00:02:16.740 STDOUT terraform:  + output = (known after apply) 2025-12-05 00:02:16.740806 | orchestrator | 00:02:16.740 STDOUT terraform:  } 2025-12-05 00:02:16.740830 | orchestrator | 00:02:16.740 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-12-05 00:02:16.740844 | orchestrator | 00:02:16.740 STDOUT terraform: Changes to Outputs: 2025-12-05 00:02:16.740908 | orchestrator | 00:02:16.740 STDOUT terraform:  + manager_address = (sensitive value) 2025-12-05 00:02:16.740925 | orchestrator | 00:02:16.740 STDOUT terraform:  + private_key = (sensitive value) 2025-12-05 00:02:16.964440 | orchestrator | 00:02:16.963 STDOUT terraform: terraform_data.image_node: Creating... 2025-12-05 00:02:16.964524 | orchestrator | 00:02:16.964 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=4289f970-93d2-b48d-5e2e-e769f2489d91] 2025-12-05 00:02:16.964531 | orchestrator | 00:02:16.964 STDOUT terraform: terraform_data.image: Creating... 2025-12-05 00:02:16.970472 | orchestrator | 00:02:16.967 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=f719cdf9-a158-4640-df6a-58543f1bc4e5] 2025-12-05 00:02:16.970548 | orchestrator | 00:02:16.968 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-12-05 00:02:16.970555 | orchestrator | 00:02:16.968 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-12-05 00:02:16.984238 | orchestrator | 00:02:16.984 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-12-05 00:02:16.985916 | orchestrator | 00:02:16.985 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-12-05 00:02:16.985971 | orchestrator | 00:02:16.985 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-12-05 00:02:16.988475 | orchestrator | 00:02:16.988 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-12-05 00:02:16.989296 | orchestrator | 00:02:16.989 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-12-05 00:02:16.991851 | orchestrator | 00:02:16.991 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-12-05 00:02:16.995474 | orchestrator | 00:02:16.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-12-05 00:02:17.010579 | orchestrator | 00:02:17.010 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 
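The plan entries above map one-to-one onto HCL blocks of the OpenStack Terraform provider. A minimal sketch of the management security group and its first two rules, reconstructed only from the attribute values printed in the plan (the provider requirement is shown once here and assumed for the later sketches; the real testbed Terraform files may organize and name things differently):

terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

# Management security group exactly as printed in the plan above.
resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

# Rule 1: SSH (tcp/22) from anywhere.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

# Rule 2: WireGuard (udp/51820) from anywhere. The remaining rules (rule3/rule4
# limited to 192.168.16.0/20, rule5 icmp, the node rules, and the VRRP rule with
# protocol "112") follow the same pattern with the values printed above.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
  description       = "wireguard"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "udp"
  port_range_min    = 51820
  port_range_max    = 51820
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}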
2025-12-05 00:02:17.437419 | orchestrator | 00:02:17.437 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-12-05 00:02:17.440881 | orchestrator | 00:02:17.440 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-12-05 00:02:17.447329 | orchestrator | 00:02:17.446 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-12-05 00:02:17.450209 | orchestrator | 00:02:17.449 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-12-05 00:02:17.493296 | orchestrator | 00:02:17.493 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-12-05 00:02:17.496267 | orchestrator | 00:02:17.496 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-12-05 00:02:18.132060 | orchestrator | 00:02:18.129 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=14b9bf6f-9867-4381-add5-a5c488de6e27] 2025-12-05 00:02:18.141027 | orchestrator | 00:02:18.140 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-12-05 00:02:20.603031 | orchestrator | 00:02:20.602 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=b336ed5a-eba8-43f6-ad9f-74d1d85f851c] 2025-12-05 00:02:20.941404 | orchestrator | 00:02:20.612 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-12-05 00:02:20.941491 | orchestrator | 00:02:20.624 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=0f2089b6-ff07-4cf8-a0e1-2eb8dc066f59] 2025-12-05 00:02:20.941506 | orchestrator | 00:02:20.632 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-12-05 00:02:20.941518 | orchestrator | 00:02:20.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=9493c8ab-88eb-4d1c-b68e-2fb951974223] 2025-12-05 00:02:20.941529 | orchestrator | 00:02:20.665 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=e047376e-b981-4d74-8cd7-bfa902726316] 2025-12-05 00:02:20.941540 | orchestrator | 00:02:20.670 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-12-05 00:02:20.941551 | orchestrator | 00:02:20.678 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=f30b86f3-d273-4481-80b1-42296575b44e] 2025-12-05 00:02:20.941562 | orchestrator | 00:02:20.678 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-12-05 00:02:20.941573 | orchestrator | 00:02:20.684 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-12-05 00:02:20.941584 | orchestrator | 00:02:20.700 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=5b6980b6-a050-4eba-a8d0-05652bf6b0d7] 2025-12-05 00:02:20.941595 | orchestrator | 00:02:20.706 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 
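terraform_data.image / image_node carry the image name "Ubuntu 24.04" and the image data sources resolve it to ID 846820b2-039e-4b42-adad-daf72e0f8ea4 above. A sketch of how such a lookup can be wired; the reference through terraform_data.output and the most_recent filter are assumptions, only the name and the data source type are confirmed by the log:

# Image name wrapped in terraform_data, as shown in the plan (input = "Ubuntu 24.04").
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# Resolve the name to an image ID; the read completes in under a second above.
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output # assumed wiring
  most_recent = true                        # assumption, not visible in this log
}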
2025-12-05 00:02:20.941605 | orchestrator | 00:02:20.733 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=0ee57b6c-78c9-45c7-961f-a1697fe20540] 2025-12-05 00:02:20.941618 | orchestrator | 00:02:20.754 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-12-05 00:02:20.941630 | orchestrator | 00:02:20.754 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=8c2eb1d4-9269-4a43-bf7f-0b1729785ea6] 2025-12-05 00:02:20.941641 | orchestrator | 00:02:20.759 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=8b5b11bf-ea67-4c87-b8fa-46ee0c130163] 2025-12-05 00:02:20.941652 | orchestrator | 00:02:20.765 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-12-05 00:02:20.941663 | orchestrator | 00:02:20.768 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-12-05 00:02:21.437904 | orchestrator | 00:02:21.437 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=ab69ee090a6f1c25625de0d96c4f447d2739b335] 2025-12-05 00:02:21.438311 | orchestrator | 00:02:21.438 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=2c1a3d56ba509714c25fdb9fafbd1f6fa42b178d] 2025-12-05 00:02:21.535230 | orchestrator | 00:02:21.534 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=b66b828d-745e-4656-9833-cbc36ccb2811] 2025-12-05 00:02:21.654985 | orchestrator | 00:02:21.654 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=27a60450-99b8-4cbc-88ec-4eb143c9f55b] 2025-12-05 00:02:21.664913 | orchestrator | 00:02:21.664 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-12-05 00:02:23.997226 | orchestrator | 00:02:23.996 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=6719879f-b8de-412d-b6f5-60aad959d025] 2025-12-05 00:02:24.029986 | orchestrator | 00:02:24.029 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=b4823521-82e2-4867-b838-0e3d236b7dce] 2025-12-05 00:02:24.083096 | orchestrator | 00:02:24.082 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=cf9fda06-5d75-4c9f-9226-7e10a3b88a40] 2025-12-05 00:02:24.113111 | orchestrator | 00:02:24.112 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=ca100b5e-c8c0-43d0-936b-6f0711be341e] 2025-12-05 00:02:24.133476 | orchestrator | 00:02:24.133 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=697b8dc1-25af-4499-8a82-0dc0a73bac28] 2025-12-05 00:02:24.136045 | orchestrator | 00:02:24.135 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=f2656be5-075f-40e9-83da-a814a549cb6b] 2025-12-05 00:02:24.554328 | orchestrator | 00:02:24.551 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=6c6f50d2-05d4-4a80-a7a1-b1b84076efe0] 2025-12-05 00:02:24.561234 | orchestrator | 00:02:24.560 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-12-05 00:02:24.561669 | orchestrator | 00:02:24.561 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 
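The management subnet, the router towards external network e6be7364-bfd8-4de7-8120-8f41c69a139a, and the router interface created here reconstruct to roughly the blocks below; literal values are taken from the plan output, while the resource references and the hard-coded external network ID (likely a variable in the real code) are assumptions:

# Subnet attributes as printed in the plan: CIDR, DNS servers, DHCP, allocation pool.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # network created above
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

# Router attached to the external network, AZ hint "nova", as in the plan.
resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

# Connect the management subnet to the router.
resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}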
2025-12-05 00:02:24.565306 | orchestrator | 00:02:24.564 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-12-05 00:02:24.787732 | orchestrator | 00:02:24.787 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=9b625fe8-5372-4199-98cb-9c96b97f1878] 2025-12-05 00:02:24.805561 | orchestrator | 00:02:24.805 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-12-05 00:02:24.805912 | orchestrator | 00:02:24.805 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-12-05 00:02:24.806723 | orchestrator | 00:02:24.806 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-12-05 00:02:24.806999 | orchestrator | 00:02:24.806 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-12-05 00:02:24.808811 | orchestrator | 00:02:24.808 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-12-05 00:02:24.815101 | orchestrator | 00:02:24.814 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-12-05 00:02:24.975739 | orchestrator | 00:02:24.975 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=078e4063-3493-40fa-86ab-f2b803c2804d] 2025-12-05 00:02:25.110804 | orchestrator | 00:02:25.110 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=9420c78a-bc99-4992-9733-64f2fd51636f] 2025-12-05 00:02:25.277194 | orchestrator | 00:02:25.276 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=84b139c2-2501-4027-9ea7-bb9f4ca272d4] 2025-12-05 00:02:25.285431 | orchestrator | 00:02:25.285 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-12-05 00:02:25.285514 | orchestrator | 00:02:25.285 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-12-05 00:02:25.285592 | orchestrator | 00:02:25.285 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-12-05 00:02:25.288868 | orchestrator | 00:02:25.288 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-12-05 00:02:25.289225 | orchestrator | 00:02:25.289 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-12-05 00:02:25.308291 | orchestrator | 00:02:25.306 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=12af0e55-6c30-46c4-b496-38111da375a6] 2025-12-05 00:02:25.318714 | orchestrator | 00:02:25.316 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-12-05 00:02:25.462111 | orchestrator | 00:02:25.461 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=5915ddae-f2a4-42a1-b301-11097c979c3a] 2025-12-05 00:02:25.472580 | orchestrator | 00:02:25.472 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 
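The management ports being created carry the allowed address pairs and the fixed IP shown in the plan excerpt at the start of this section. A sketch of one such port; the security group reference and the omitted count/for_each handling are assumptions:

# One management port as planned above: a fixed IP plus allowed address pairs
# for addresses that can move between instances (e.g. 192.168.16.254).
resource "openstack_networking_port_v2" "node_port_management" {
  network_id         = openstack_networking_network_v2.net_management.id
  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id] # assumed

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.15"
  }

  allowed_address_pairs {
    ip_address = "192.168.16.254/32"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/32"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/32"
  }
}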
2025-12-05 00:02:25.553599 | orchestrator | 00:02:25.553 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=5a169b35-eff2-4184-b511-c4ed3ef51312] 2025-12-05 00:02:25.565683 | orchestrator | 00:02:25.565 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-12-05 00:02:25.572882 | orchestrator | 00:02:25.572 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=af0cf929-0a4f-4023-9e9b-33d556cf4a15] 2025-12-05 00:02:25.585202 | orchestrator | 00:02:25.584 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-12-05 00:02:25.735402 | orchestrator | 00:02:25.734 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=5c4c55bd-4ddf-46fe-9dcf-bfdf6a6ce213] 2025-12-05 00:02:25.745825 | orchestrator | 00:02:25.745 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-12-05 00:02:25.881446 | orchestrator | 00:02:25.880 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=b1c82b71-7af7-435e-9802-d3959bfcc5eb] 2025-12-05 00:02:25.989183 | orchestrator | 00:02:25.988 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=be3a6407-13ce-4796-97ec-dff9aa9cfea5] 2025-12-05 00:02:26.066540 | orchestrator | 00:02:26.066 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=6784a5d3-ac73-4fdf-9047-67b1ec649e6c] 2025-12-05 00:02:26.252230 | orchestrator | 00:02:26.251 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=0efd979f-04be-4783-975a-7064bd4fb7bd] 2025-12-05 00:02:26.340277 | orchestrator | 00:02:26.339 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=3bc557c7-16c2-487e-83a5-3fff52bb4b02] 2025-12-05 00:02:26.531407 | orchestrator | 00:02:26.531 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=f57a9d40-427d-40c9-a845-371d5ff6f58d] 2025-12-05 00:02:26.582752 | orchestrator | 00:02:26.582 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=52966465-b54e-49a6-a79f-a67b458fc457] 2025-12-05 00:02:26.672874 | orchestrator | 00:02:26.672 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=306d41de-1783-4f9a-9889-01fad5de5cfd] 2025-12-05 00:02:27.288304 | orchestrator | 00:02:27.287 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=f53dc70b-5531-435b-b4fe-d14aae5636cd] 2025-12-05 00:02:27.336056 | orchestrator | 00:02:27.335 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=186f33a1-2d8c-4a06-9c58-5a0e0e4ae622] 2025-12-05 00:02:27.355235 | orchestrator | 00:02:27.354 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-12-05 00:02:27.369199 | orchestrator | 00:02:27.369 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-12-05 00:02:27.375651 | orchestrator | 00:02:27.375 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 
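The manager floating IP and its association created next correspond to the two resources sketched below; the pool name is not visible in this log and is only a placeholder:

# Floating IP for the manager; "external" is a placeholder pool name.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external" # placeholder; the real pool/network name is not shown in this log
}

# Bind the floating IP to the manager's management port.
resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}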
2025-12-05 00:02:27.383939 | orchestrator | 00:02:27.383 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-12-05 00:02:27.386581 | orchestrator | 00:02:27.386 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-12-05 00:02:27.388242 | orchestrator | 00:02:27.388 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-12-05 00:02:27.390534 | orchestrator | 00:02:27.390 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-12-05 00:02:30.100078 | orchestrator | 00:02:30.099 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=e13b3a69-42bf-4e46-95eb-b4a87e8914b5] 2025-12-05 00:02:30.117843 | orchestrator | 00:02:30.117 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-12-05 00:02:30.122473 | orchestrator | 00:02:30.122 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-12-05 00:02:30.123173 | orchestrator | 00:02:30.123 STDOUT terraform: local_file.inventory: Creating... 2025-12-05 00:02:30.131245 | orchestrator | 00:02:30.130 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=ff76510ae8301715ee1aa9e0334b691ccdd34934] 2025-12-05 00:02:30.131630 | orchestrator | 00:02:30.131 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=288208ec366d024343ef3d63e3fc9fead89f9970] 2025-12-05 00:02:31.087069 | orchestrator | 00:02:31.086 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=e13b3a69-42bf-4e46-95eb-b4a87e8914b5] 2025-12-05 00:02:37.372112 | orchestrator | 00:02:37.371 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-12-05 00:02:37.381608 | orchestrator | 00:02:37.381 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-12-05 00:02:37.384144 | orchestrator | 00:02:37.383 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-12-05 00:02:37.387311 | orchestrator | 00:02:37.387 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-12-05 00:02:37.392840 | orchestrator | 00:02:37.392 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-12-05 00:02:37.392960 | orchestrator | 00:02:37.392 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-12-05 00:02:47.374680 | orchestrator | 00:02:47.374 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-12-05 00:02:47.382947 | orchestrator | 00:02:47.382 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-12-05 00:02:47.385225 | orchestrator | 00:02:47.385 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-12-05 00:02:47.388454 | orchestrator | 00:02:47.388 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-12-05 00:02:47.393799 | orchestrator | 00:02:47.393 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-12-05 00:02:47.393893 | orchestrator | 00:02:47.393 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[20s elapsed] 2025-12-05 00:02:47.991975 | orchestrator | 00:02:47.991 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=3609dd0e-389d-4569-948a-1120c6e64d6f] 2025-12-05 00:02:57.375311 | orchestrator | 00:02:57.374 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-12-05 00:02:57.384169 | orchestrator | 00:02:57.383 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-12-05 00:02:57.386184 | orchestrator | 00:02:57.385 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-12-05 00:02:57.389571 | orchestrator | 00:02:57.389 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-12-05 00:02:57.394921 | orchestrator | 00:02:57.394 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-12-05 00:02:58.047322 | orchestrator | 00:02:58.046 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=7a5cdb57-33e7-4ee9-9458-5349c23da15a] 2025-12-05 00:02:58.120486 | orchestrator | 00:02:58.120 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=a4b17219-840f-47b0-8e94-778d4be52e3d] 2025-12-05 00:02:58.260972 | orchestrator | 00:02:58.260 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=b29cb8c1-7437-4f96-8fe2-5fe5eddc94c7] 2025-12-05 00:02:58.281065 | orchestrator | 00:02:58.280 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=6b46139a-d13f-478f-a691-a331645123e0] 2025-12-05 00:02:58.397534 | orchestrator | 00:02:58.397 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=7049a9a2-6ed6-4db7-a2dc-70103c10edb0] 2025-12-05 00:02:58.430107 | orchestrator | 00:02:58.429 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-12-05 00:02:58.431822 | orchestrator | 00:02:58.431 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-12-05 00:02:58.435829 | orchestrator | 00:02:58.435 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1262329259907401175] 2025-12-05 00:02:58.436429 | orchestrator | 00:02:58.436 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-12-05 00:02:58.439599 | orchestrator | 00:02:58.439 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-12-05 00:02:58.442960 | orchestrator | 00:02:58.442 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-12-05 00:02:58.443223 | orchestrator | 00:02:58.443 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-12-05 00:02:58.443424 | orchestrator | 00:02:58.443 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-12-05 00:02:58.443615 | orchestrator | 00:02:58.443 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-12-05 00:02:58.458930 | orchestrator | 00:02:58.458 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-12-05 00:02:58.467122 | orchestrator | 00:02:58.466 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
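The volume attachments created here pair instance and volume UUIDs (their IDs print as "<instance>/<volume>" below), and the pairs show three extra volumes per node_server[3..5]. A sketch under that observation; the count and the index expression are inferred from the log, not taken from the testbed code:

# Attach the nine data volumes seen in this log to the three storage nodes,
# three volumes each (matches the instance/volume ID pairs printed below).
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id # inferred mapping
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}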
2025-12-05 00:02:58.468728 | orchestrator | 00:02:58.468 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-12-05 00:03:01.862969 | orchestrator | 00:03:01.862 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=7a5cdb57-33e7-4ee9-9458-5349c23da15a/e047376e-b981-4d74-8cd7-bfa902726316] 2025-12-05 00:03:01.876476 | orchestrator | 00:03:01.875 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=3609dd0e-389d-4569-948a-1120c6e64d6f/8c2eb1d4-9269-4a43-bf7f-0b1729785ea6] 2025-12-05 00:03:01.890815 | orchestrator | 00:03:01.890 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=6b46139a-d13f-478f-a691-a331645123e0/0f2089b6-ff07-4cf8-a0e1-2eb8dc066f59] 2025-12-05 00:03:01.901323 | orchestrator | 00:03:01.900 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=7a5cdb57-33e7-4ee9-9458-5349c23da15a/8b5b11bf-ea67-4c87-b8fa-46ee0c130163] 2025-12-05 00:03:01.916520 | orchestrator | 00:03:01.915 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=6b46139a-d13f-478f-a691-a331645123e0/5b6980b6-a050-4eba-a8d0-05652bf6b0d7] 2025-12-05 00:03:01.948739 | orchestrator | 00:03:01.948 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=3609dd0e-389d-4569-948a-1120c6e64d6f/b336ed5a-eba8-43f6-ad9f-74d1d85f851c] 2025-12-05 00:03:08.039725 | orchestrator | 00:03:08.039 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=6b46139a-d13f-478f-a691-a331645123e0/9493c8ab-88eb-4d1c-b68e-2fb951974223] 2025-12-05 00:03:08.056150 | orchestrator | 00:03:08.055 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=7a5cdb57-33e7-4ee9-9458-5349c23da15a/f30b86f3-d273-4481-80b1-42296575b44e] 2025-12-05 00:03:08.080139 | orchestrator | 00:03:08.079 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=3609dd0e-389d-4569-948a-1120c6e64d6f/0ee57b6c-78c9-45c7-961f-a1697fe20540] 2025-12-05 00:03:08.469707 | orchestrator | 00:03:08.469 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-12-05 00:03:18.470200 | orchestrator | 00:03:18.469 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-12-05 00:03:18.837023 | orchestrator | 00:03:18.836 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=0ecf109d-7694-40ca-9adc-93a44491805d] 2025-12-05 00:03:18.853349 | orchestrator | 00:03:18.853 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
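Both outputs are declared sensitive, which is why they print empty under "Outputs:" below, and the manager address is additionally written to disk for the later "Fetch manager address" task. A sketch of that tail of the configuration; the file name and the exact source of private_key are assumptions:

# Sensitive outputs: values are stored in state but not echoed to the console.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key # one possible source; not visible in this log
  sensitive = true
}

# Materialize the address for the subsequent Zuul task "Fetch manager address".
resource "local_file" "MANAGER_ADDRESS" {
  content  = openstack_networking_floatingip_v2.manager_floating_ip.address
  filename = ".MANAGER_ADDRESS" # placeholder path
}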
2025-12-05 00:03:18.853440 | orchestrator | 00:03:18.853 STDOUT terraform: Outputs: 2025-12-05 00:03:18.853457 | orchestrator | 00:03:18.853 STDOUT terraform: manager_address = 2025-12-05 00:03:18.853462 | orchestrator | 00:03:18.853 STDOUT terraform: private_key = 2025-12-05 00:03:18.992139 | orchestrator | ok: Runtime: 0:01:09.889533 2025-12-05 00:03:19.031108 | 2025-12-05 00:03:19.031265 | TASK [Create infrastructure (stable)] 2025-12-05 00:03:19.567966 | orchestrator | skipping: Conditional result was False 2025-12-05 00:03:19.586231 | 2025-12-05 00:03:19.586432 | TASK [Fetch manager address] 2025-12-05 00:03:20.183097 | orchestrator | ok 2025-12-05 00:03:20.202688 | 2025-12-05 00:03:20.203005 | TASK [Set manager_host address] 2025-12-05 00:03:20.294580 | orchestrator | ok 2025-12-05 00:03:20.303315 | 2025-12-05 00:03:20.303455 | LOOP [Update ansible collections] 2025-12-05 00:03:22.070782 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-12-05 00:03:22.071264 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-12-05 00:03:22.071330 | orchestrator | Starting galaxy collection install process 2025-12-05 00:03:22.071373 | orchestrator | Process install dependency map 2025-12-05 00:03:22.071410 | orchestrator | Starting collection install process 2025-12-05 00:03:22.071444 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-12-05 00:03:22.071482 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-12-05 00:03:22.071543 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-12-05 00:03:22.071638 | orchestrator | ok: Item: commons Runtime: 0:00:01.402530 2025-12-05 00:03:23.093770 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-12-05 00:03:23.093942 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-12-05 00:03:23.094011 | orchestrator | Starting galaxy collection install process 2025-12-05 00:03:23.094050 | orchestrator | Process install dependency map 2025-12-05 00:03:23.094085 | orchestrator | Starting collection install process 2025-12-05 00:03:23.094117 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-12-05 00:03:23.094149 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-12-05 00:03:23.094181 | orchestrator | osism.services:999.0.0 was installed successfully 2025-12-05 00:03:23.094229 | orchestrator | ok: Item: services Runtime: 0:00:00.727018 2025-12-05 00:03:23.118296 | 2025-12-05 00:03:23.118485 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-12-05 00:03:33.835388 | orchestrator | ok 2025-12-05 00:03:33.854045 | 2025-12-05 00:03:33.854205 | TASK [Wait a little longer for the manager so that everything is ready] 2025-12-05 00:04:33.910948 | orchestrator | ok 2025-12-05 00:04:33.929342 | 2025-12-05 00:04:33.929552 | TASK [Fetch manager ssh hostkey] 2025-12-05 00:04:35.507977 | orchestrator | Output suppressed because no_log was given 2025-12-05 00:04:35.526106 | 2025-12-05 00:04:35.526313 | TASK [Get ssh keypair from terraform environment] 2025-12-05 00:04:36.067126 | orchestrator 
| ok: Runtime: 0:00:00.008297 2025-12-05 00:04:36.086968 | 2025-12-05 00:04:36.087379 | TASK [Point out that the following task takes some time and does not give any output] 2025-12-05 00:04:36.119881 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-12-05 00:04:36.127370 | 2025-12-05 00:04:36.127494 | TASK [Run manager part 0] 2025-12-05 00:04:37.287970 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-12-05 00:04:37.350538 | orchestrator | 2025-12-05 00:04:37.350638 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-12-05 00:04:37.350657 | orchestrator | 2025-12-05 00:04:37.350693 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-12-05 00:04:39.403355 | orchestrator | ok: [testbed-manager] 2025-12-05 00:04:39.403393 | orchestrator | 2025-12-05 00:04:39.403414 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-12-05 00:04:39.403425 | orchestrator | 2025-12-05 00:04:39.403436 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-12-05 00:04:41.331736 | orchestrator | ok: [testbed-manager] 2025-12-05 00:04:41.331786 | orchestrator | 2025-12-05 00:04:41.331794 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-12-05 00:04:42.024789 | orchestrator | ok: [testbed-manager] 2025-12-05 00:04:42.024844 | orchestrator | 2025-12-05 00:04:42.024857 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-12-05 00:04:42.077443 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:04:42.077482 | orchestrator | 2025-12-05 00:04:42.077491 | orchestrator | TASK [Update package cache] **************************************************** 2025-12-05 00:04:42.107548 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:04:42.107579 | orchestrator | 2025-12-05 00:04:42.107586 | orchestrator | TASK [Install required packages] *********************************************** 2025-12-05 00:04:42.136567 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:04:42.136602 | orchestrator | 2025-12-05 00:04:42.136608 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-12-05 00:04:42.166972 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:04:42.167013 | orchestrator | 2025-12-05 00:04:42.167022 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-12-05 00:04:42.196230 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:04:42.196262 | orchestrator | 2025-12-05 00:04:42.196269 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2025-12-05 00:04:42.227914 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:04:42.227961 | orchestrator | 2025-12-05 00:04:42.227968 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-12-05 00:04:42.253337 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:04:42.253368 | orchestrator | 2025-12-05 00:04:42.253374 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-12-05 00:04:42.992056 | orchestrator | changed: 
[testbed-manager] 2025-12-05 00:04:42.992118 | orchestrator | 2025-12-05 00:04:42.992127 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-12-05 00:07:15.388339 | orchestrator | changed: [testbed-manager] 2025-12-05 00:07:15.388571 | orchestrator | 2025-12-05 00:07:15.388596 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-12-05 00:08:45.097005 | orchestrator | changed: [testbed-manager] 2025-12-05 00:08:45.097117 | orchestrator | 2025-12-05 00:08:45.097137 | orchestrator | TASK [Install required packages] *********************************************** 2025-12-05 00:09:08.129534 | orchestrator | changed: [testbed-manager] 2025-12-05 00:09:08.129581 | orchestrator | 2025-12-05 00:09:08.129592 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-12-05 00:09:17.685882 | orchestrator | changed: [testbed-manager] 2025-12-05 00:09:17.685997 | orchestrator | 2025-12-05 00:09:17.686071 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-12-05 00:09:17.737150 | orchestrator | ok: [testbed-manager] 2025-12-05 00:09:17.737212 | orchestrator | 2025-12-05 00:09:17.737220 | orchestrator | TASK [Get current user] ******************************************************** 2025-12-05 00:09:18.568574 | orchestrator | ok: [testbed-manager] 2025-12-05 00:09:18.569984 | orchestrator | 2025-12-05 00:09:18.570049 | orchestrator | TASK [Create venv directory] *************************************************** 2025-12-05 00:09:19.325618 | orchestrator | changed: [testbed-manager] 2025-12-05 00:09:19.325727 | orchestrator | 2025-12-05 00:09:19.325750 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-12-05 00:09:25.664096 | orchestrator | changed: [testbed-manager] 2025-12-05 00:09:25.664223 | orchestrator | 2025-12-05 00:09:25.664266 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-12-05 00:09:31.830011 | orchestrator | changed: [testbed-manager] 2025-12-05 00:09:31.830143 | orchestrator | 2025-12-05 00:09:31.830157 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-12-05 00:09:34.509948 | orchestrator | changed: [testbed-manager] 2025-12-05 00:09:34.510590 | orchestrator | 2025-12-05 00:09:34.510614 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-12-05 00:09:36.241878 | orchestrator | changed: [testbed-manager] 2025-12-05 00:09:36.241967 | orchestrator | 2025-12-05 00:09:36.241983 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-12-05 00:09:37.416231 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-12-05 00:09:37.416314 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-12-05 00:09:37.416331 | orchestrator | 2025-12-05 00:09:37.416346 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-12-05 00:09:37.462872 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-12-05 00:09:37.462937 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-12-05 00:09:37.462947 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-12-05 00:09:37.462956 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-12-05 00:09:43.767179 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-12-05 00:09:43.767286 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-12-05 00:09:43.767305 | orchestrator | 2025-12-05 00:09:43.767319 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-12-05 00:09:44.353890 | orchestrator | changed: [testbed-manager] 2025-12-05 00:09:44.353999 | orchestrator | 2025-12-05 00:09:44.354052 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-12-05 00:10:04.848709 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-12-05 00:10:04.848807 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-12-05 00:10:04.848825 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-12-05 00:10:04.848838 | orchestrator | 2025-12-05 00:10:04.848851 | orchestrator | TASK [Install local collections] *********************************************** 2025-12-05 00:10:07.146273 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-12-05 00:10:07.146320 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-12-05 00:10:07.146326 | orchestrator | 2025-12-05 00:10:07.146330 | orchestrator | PLAY [Create operator user] **************************************************** 2025-12-05 00:10:07.146335 | orchestrator | 2025-12-05 00:10:07.146339 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-12-05 00:10:08.529468 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:08.529502 | orchestrator | 2025-12-05 00:10:08.529512 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-12-05 00:10:08.576612 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:08.576644 | orchestrator | 2025-12-05 00:10:08.576652 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-12-05 00:10:08.642111 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:08.692866 | orchestrator | 2025-12-05 00:10:08.692925 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-12-05 00:10:09.367006 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:09.367042 | orchestrator | 2025-12-05 00:10:09.367051 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-12-05 00:10:10.049117 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:10.049154 | orchestrator | 2025-12-05 00:10:10.049162 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-12-05 00:10:11.416586 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-12-05 00:10:11.416666 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-12-05 00:10:11.416681 | orchestrator | 2025-12-05 00:10:11.416709 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-12-05 00:10:12.835618 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:12.835734 | orchestrator | 2025-12-05 00:10:12.835752 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-12-05 00:10:14.592814 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-12-05 00:10:14.592915 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-12-05 00:10:14.592927 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-12-05 00:10:14.592936 | orchestrator | 2025-12-05 00:10:14.592947 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-12-05 00:10:14.650939 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:14.651006 | orchestrator | 2025-12-05 00:10:14.651016 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2025-12-05 00:10:14.723054 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:14.723118 | orchestrator | 2025-12-05 00:10:14.723129 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-12-05 00:10:15.270732 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:15.270826 | orchestrator | 2025-12-05 00:10:15.270838 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-12-05 00:10:15.345230 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:15.345362 | orchestrator | 2025-12-05 00:10:15.345379 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-12-05 00:10:16.227102 | orchestrator | changed: [testbed-manager] => (item=None) 2025-12-05 00:10:16.227501 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:16.227517 | orchestrator | 2025-12-05 00:10:16.227525 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-12-05 00:10:16.267254 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:16.267357 | orchestrator | 2025-12-05 00:10:16.267375 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-12-05 00:10:16.310311 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:16.310514 | orchestrator | 2025-12-05 00:10:16.310534 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-12-05 00:10:16.344406 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:16.344476 | orchestrator | 2025-12-05 00:10:16.344489 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-12-05 00:10:16.411905 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:16.411968 | orchestrator | 2025-12-05 00:10:16.411976 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-12-05 00:10:17.114416 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:17.114525 | orchestrator | 2025-12-05 00:10:17.114543 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-12-05 00:10:17.114556 | orchestrator | 2025-12-05 00:10:17.114565 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-12-05 00:10:18.495297 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:18.495369 | orchestrator | 2025-12-05 00:10:18.495386 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-12-05 00:10:19.457301 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:19.457401 | orchestrator | 2025-12-05 
00:10:19.457416 | orchestrator | PLAY RECAP ********************************************************************* 2025-12-05 00:10:19.457428 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2025-12-05 00:10:19.457440 | orchestrator | 2025-12-05 00:10:19.907679 | orchestrator | ok: Runtime: 0:05:43.100449 2025-12-05 00:10:19.927579 | 2025-12-05 00:10:19.928320 | TASK [Point out that logging in on the manager is now possible] 2025-12-05 00:10:19.978231 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-12-05 00:10:19.995739 | 2025-12-05 00:10:19.995937 | TASK [Point out that the following task takes some time and does not give any output] 2025-12-05 00:10:20.034594 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 2025-12-05 00:10:20.043951 | 2025-12-05 00:10:20.044170 | TASK [Run manager part 1 + 2] 2025-12-05 00:10:20.970684 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-12-05 00:10:21.031170 | orchestrator | 2025-12-05 00:10:21.031234 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-12-05 00:10:21.031242 | orchestrator | 2025-12-05 00:10:21.031257 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-12-05 00:10:23.638880 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:23.638945 | orchestrator | 2025-12-05 00:10:23.638969 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-12-05 00:10:23.679112 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:23.679167 | orchestrator | 2025-12-05 00:10:23.679176 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-12-05 00:10:23.715722 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:23.715778 | orchestrator | 2025-12-05 00:10:23.715785 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-12-05 00:10:23.749235 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:23.749346 | orchestrator | 2025-12-05 00:10:23.749354 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-12-05 00:10:23.820243 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:23.820363 | orchestrator | 2025-12-05 00:10:23.820375 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-12-05 00:10:23.883230 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:23.883320 | orchestrator | 2025-12-05 00:10:23.883330 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-12-05 00:10:23.931900 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-12-05 00:10:23.931959 | orchestrator | 2025-12-05 00:10:23.931966 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-12-05 00:10:24.660491 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:24.660637 | orchestrator | 2025-12-05 00:10:24.660650 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-12-05 00:10:24.706650 |
orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:24.706704 | orchestrator | 2025-12-05 00:10:24.706713 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-12-05 00:10:26.044275 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:26.044328 | orchestrator | 2025-12-05 00:10:26.044337 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-12-05 00:10:26.596907 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:26.596960 | orchestrator | 2025-12-05 00:10:26.596967 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-12-05 00:10:27.764796 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:27.764852 | orchestrator | 2025-12-05 00:10:27.764865 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-12-05 00:10:43.168085 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:43.168145 | orchestrator | 2025-12-05 00:10:43.168151 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-12-05 00:10:43.893120 | orchestrator | ok: [testbed-manager] 2025-12-05 00:10:43.893276 | orchestrator | 2025-12-05 00:10:43.893298 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-12-05 00:10:43.954770 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:43.954848 | orchestrator | 2025-12-05 00:10:43.954855 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-12-05 00:10:44.966347 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:44.966418 | orchestrator | 2025-12-05 00:10:44.966427 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-12-05 00:10:45.944621 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:45.945066 | orchestrator | 2025-12-05 00:10:45.945105 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-12-05 00:10:46.525812 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:46.525910 | orchestrator | 2025-12-05 00:10:46.525924 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-12-05 00:10:46.568319 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-12-05 00:10:46.568495 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-12-05 00:10:46.568521 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-12-05 00:10:46.568544 | orchestrator | deprecation_warnings=False in ansible.cfg. 
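The osism.commons.repository tasks above drop the classic /etc/apt/sources.list in favour of a deb822-style ubuntu.sources file and then refresh the package cache. A rough shell sketch of that sequence follows; the deb822 stanza is a generic Ubuntu 24.04 (noble) example, not the template the role actually ships, and the 99osism apt configuration copied above is left out.

# Sketch of the repository swap (run as root); ubuntu.sources contents are illustrative only.
install -d /etc/apt/sources.list.d
rm -f /etc/apt/sources.list
cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
apt-get update

The deb822 format keeps one repository definition per stanza with named fields, which makes it easy to manage as a single templated file instead of editing sources.list lines in place.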
2025-12-05 00:10:49.005587 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:49.005714 | orchestrator | 2025-12-05 00:10:49.005740 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-12-05 00:10:57.796900 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-12-05 00:10:57.796981 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-12-05 00:10:57.796997 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-12-05 00:10:57.797010 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-12-05 00:10:57.797030 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-12-05 00:10:57.797042 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-12-05 00:10:57.797053 | orchestrator | 2025-12-05 00:10:57.797066 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-12-05 00:10:58.820550 | orchestrator | changed: [testbed-manager] 2025-12-05 00:10:58.820652 | orchestrator | 2025-12-05 00:10:58.820668 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-12-05 00:10:58.870114 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:10:58.870251 | orchestrator | 2025-12-05 00:10:58.870271 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-12-05 00:11:01.925348 | orchestrator | changed: [testbed-manager] 2025-12-05 00:11:01.925418 | orchestrator | 2025-12-05 00:11:01.925427 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-12-05 00:11:01.969253 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:11:01.969322 | orchestrator | 2025-12-05 00:11:01.969332 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-12-05 00:12:41.679889 | orchestrator | changed: [testbed-manager] 2025-12-05 00:12:41.680032 | orchestrator | 2025-12-05 00:12:41.680056 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-12-05 00:12:42.815915 | orchestrator | ok: [testbed-manager] 2025-12-05 00:12:42.816038 | orchestrator | 2025-12-05 00:12:42.816060 | orchestrator | PLAY RECAP ********************************************************************* 2025-12-05 00:12:42.816076 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-12-05 00:12:42.816091 | orchestrator | 2025-12-05 00:12:43.180653 | orchestrator | ok: Runtime: 0:02:22.530841 2025-12-05 00:12:43.199088 | 2025-12-05 00:12:43.199255 | TASK [Reboot manager] 2025-12-05 00:12:44.740967 | orchestrator | ok: Runtime: 0:00:00.957307 2025-12-05 00:12:44.759342 | 2025-12-05 00:12:44.759599 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-12-05 00:13:01.197325 | orchestrator | ok 2025-12-05 00:13:01.208143 | 2025-12-05 00:13:01.208332 | TASK [Wait a little longer for the manager so that everything is ready] 2025-12-05 00:14:01.265959 | orchestrator | ok 2025-12-05 00:14:01.276096 | 2025-12-05 00:14:01.276249 | TASK [Deploy manager + bootstrap nodes] 2025-12-05 00:14:04.863528 | orchestrator | 2025-12-05 00:14:04.863944 | orchestrator | # DEPLOY MANAGER 2025-12-05 00:14:04.863985 | orchestrator | 2025-12-05 00:14:04.864001 | orchestrator | + set -e 2025-12-05 00:14:04.864016 | orchestrator | + echo 2025-12-05 00:14:04.864031 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-12-05 00:14:04.864052 | orchestrator | + echo 2025-12-05 00:14:04.864101 | orchestrator | + cat /opt/manager-vars.sh 2025-12-05 00:14:04.867200 | orchestrator | export NUMBER_OF_NODES=6 2025-12-05 00:14:04.867234 | orchestrator | 2025-12-05 00:14:04.867248 | orchestrator | export CEPH_VERSION=reef 2025-12-05 00:14:04.867263 | orchestrator | export CONFIGURATION_VERSION=main 2025-12-05 00:14:04.867276 | orchestrator | export MANAGER_VERSION=latest 2025-12-05 00:14:04.867301 | orchestrator | export OPENSTACK_VERSION=2025.1 2025-12-05 00:14:04.867313 | orchestrator | 2025-12-05 00:14:04.867333 | orchestrator | export ARA=false 2025-12-05 00:14:04.867346 | orchestrator | export DEPLOY_MODE=manager 2025-12-05 00:14:04.867364 | orchestrator | export TEMPEST=true 2025-12-05 00:14:04.867378 | orchestrator | export IS_ZUUL=true 2025-12-05 00:14:04.867390 | orchestrator | 2025-12-05 00:14:04.867408 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2025-12-05 00:14:04.867422 | orchestrator | export EXTERNAL_API=false 2025-12-05 00:14:04.867434 | orchestrator | 2025-12-05 00:14:04.867446 | orchestrator | export IMAGE_USER=ubuntu 2025-12-05 00:14:04.867461 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-12-05 00:14:04.867473 | orchestrator | 2025-12-05 00:14:04.867485 | orchestrator | export CEPH_STACK=ceph-ansible 2025-12-05 00:14:04.867597 | orchestrator | 2025-12-05 00:14:04.867616 | orchestrator | + echo 2025-12-05 00:14:04.867634 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-12-05 00:14:04.868669 | orchestrator | ++ export INTERACTIVE=false 2025-12-05 00:14:04.868691 | orchestrator | ++ INTERACTIVE=false 2025-12-05 00:14:04.868706 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-12-05 00:14:04.868720 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-12-05 00:14:04.868878 | orchestrator | + source /opt/manager-vars.sh 2025-12-05 00:14:04.869207 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-12-05 00:14:04.869329 | orchestrator | ++ NUMBER_OF_NODES=6 2025-12-05 00:14:04.869353 | orchestrator | ++ export CEPH_VERSION=reef 2025-12-05 00:14:04.869366 | orchestrator | ++ CEPH_VERSION=reef 2025-12-05 00:14:04.869377 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-12-05 00:14:04.869391 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-12-05 00:14:04.869403 | orchestrator | ++ export MANAGER_VERSION=latest 2025-12-05 00:14:04.869414 | orchestrator | ++ MANAGER_VERSION=latest 2025-12-05 00:14:04.869424 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2025-12-05 00:14:04.869452 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2025-12-05 00:14:04.869463 | orchestrator | ++ export ARA=false 2025-12-05 00:14:04.869488 | orchestrator | ++ ARA=false 2025-12-05 00:14:04.869500 | orchestrator | ++ export DEPLOY_MODE=manager 2025-12-05 00:14:04.869511 | orchestrator | ++ DEPLOY_MODE=manager 2025-12-05 00:14:04.869522 | orchestrator | ++ export TEMPEST=true 2025-12-05 00:14:04.869533 | orchestrator | ++ TEMPEST=true 2025-12-05 00:14:04.869544 | orchestrator | ++ export IS_ZUUL=true 2025-12-05 00:14:04.869555 | orchestrator | ++ IS_ZUUL=true 2025-12-05 00:14:04.869566 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2025-12-05 00:14:04.869578 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2025-12-05 00:14:04.869589 | orchestrator | ++ export EXTERNAL_API=false 2025-12-05 00:14:04.869600 | orchestrator | ++ EXTERNAL_API=false 2025-12-05 00:14:04.869611 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-12-05 
00:14:04.869622 | orchestrator | ++ IMAGE_USER=ubuntu 2025-12-05 00:14:04.869633 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-12-05 00:14:04.869645 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-12-05 00:14:04.869656 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-12-05 00:14:04.869672 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-12-05 00:14:04.869684 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-12-05 00:14:04.922218 | orchestrator | + docker version 2025-12-05 00:14:05.171855 | orchestrator | Client: Docker Engine - Community 2025-12-05 00:14:05.171973 | orchestrator | Version: 27.5.1 2025-12-05 00:14:05.171988 | orchestrator | API version: 1.47 2025-12-05 00:14:05.172003 | orchestrator | Go version: go1.22.11 2025-12-05 00:14:05.172015 | orchestrator | Git commit: 9f9e405 2025-12-05 00:14:05.172026 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-12-05 00:14:05.172039 | orchestrator | OS/Arch: linux/amd64 2025-12-05 00:14:05.172050 | orchestrator | Context: default 2025-12-05 00:14:05.172061 | orchestrator | 2025-12-05 00:14:05.172072 | orchestrator | Server: Docker Engine - Community 2025-12-05 00:14:05.172083 | orchestrator | Engine: 2025-12-05 00:14:05.172094 | orchestrator | Version: 27.5.1 2025-12-05 00:14:05.172106 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-12-05 00:14:05.172162 | orchestrator | Go version: go1.22.11 2025-12-05 00:14:05.172173 | orchestrator | Git commit: 4c9b3b0 2025-12-05 00:14:05.172184 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-12-05 00:14:05.172195 | orchestrator | OS/Arch: linux/amd64 2025-12-05 00:14:05.172206 | orchestrator | Experimental: false 2025-12-05 00:14:05.172217 | orchestrator | containerd: 2025-12-05 00:14:05.172227 | orchestrator | Version: v2.2.0 2025-12-05 00:14:05.172239 | orchestrator | GitCommit: 1c4457e00facac03ce1d75f7b6777a7a851e5c41 2025-12-05 00:14:05.172250 | orchestrator | runc: 2025-12-05 00:14:05.172261 | orchestrator | Version: 1.3.4 2025-12-05 00:14:05.172272 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2025-12-05 00:14:05.172283 | orchestrator | docker-init: 2025-12-05 00:14:05.172294 | orchestrator | Version: 0.19.0 2025-12-05 00:14:05.172305 | orchestrator | GitCommit: de40ad0 2025-12-05 00:14:05.175567 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-12-05 00:14:05.184654 | orchestrator | + set -e 2025-12-05 00:14:05.184681 | orchestrator | + source /opt/manager-vars.sh 2025-12-05 00:14:05.184697 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-12-05 00:14:05.184711 | orchestrator | ++ NUMBER_OF_NODES=6 2025-12-05 00:14:05.184723 | orchestrator | ++ export CEPH_VERSION=reef 2025-12-05 00:14:05.184734 | orchestrator | ++ CEPH_VERSION=reef 2025-12-05 00:14:05.184746 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-12-05 00:14:05.184758 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-12-05 00:14:05.184776 | orchestrator | ++ export MANAGER_VERSION=latest 2025-12-05 00:14:05.184788 | orchestrator | ++ MANAGER_VERSION=latest 2025-12-05 00:14:05.184799 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2025-12-05 00:14:05.184810 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2025-12-05 00:14:05.184854 | orchestrator | ++ export ARA=false 2025-12-05 00:14:05.184866 | orchestrator | ++ ARA=false 2025-12-05 00:14:05.184877 | orchestrator | ++ export DEPLOY_MODE=manager 2025-12-05 00:14:05.184889 | orchestrator | ++ DEPLOY_MODE=manager 2025-12-05 00:14:05.184899 | orchestrator | ++ 
export TEMPEST=true 2025-12-05 00:14:05.184910 | orchestrator | ++ TEMPEST=true 2025-12-05 00:14:05.184921 | orchestrator | ++ export IS_ZUUL=true 2025-12-05 00:14:05.184932 | orchestrator | ++ IS_ZUUL=true 2025-12-05 00:14:05.184943 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2025-12-05 00:14:05.184954 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2025-12-05 00:14:05.184965 | orchestrator | ++ export EXTERNAL_API=false 2025-12-05 00:14:05.184975 | orchestrator | ++ EXTERNAL_API=false 2025-12-05 00:14:05.184986 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-12-05 00:14:05.184997 | orchestrator | ++ IMAGE_USER=ubuntu 2025-12-05 00:14:05.185007 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-12-05 00:14:05.185018 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-12-05 00:14:05.185030 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-12-05 00:14:05.185041 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-12-05 00:14:05.185052 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-12-05 00:14:05.185067 | orchestrator | ++ export INTERACTIVE=false 2025-12-05 00:14:05.185078 | orchestrator | ++ INTERACTIVE=false 2025-12-05 00:14:05.185089 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-12-05 00:14:05.185105 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-12-05 00:14:05.185350 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-12-05 00:14:05.185366 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-12-05 00:14:05.185378 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-12-05 00:14:05.192428 | orchestrator | + set -e 2025-12-05 00:14:05.193024 | orchestrator | + VERSION=reef 2025-12-05 00:14:05.193719 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-12-05 00:14:05.199611 | orchestrator | + [[ -n ceph_version: reef ]] 2025-12-05 00:14:05.199640 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-12-05 00:14:05.205195 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2025-12-05 00:14:05.211455 | orchestrator | + set -e 2025-12-05 00:14:05.211501 | orchestrator | + VERSION=2025.1 2025-12-05 00:14:05.212477 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-12-05 00:14:05.216647 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-12-05 00:14:05.216714 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2025-12-05 00:14:05.221757 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-12-05 00:14:05.222605 | orchestrator | ++ semver latest 7.0.0 2025-12-05 00:14:05.287118 | orchestrator | + [[ -1 -ge 0 ]] 2025-12-05 00:14:05.287243 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-12-05 00:14:05.287262 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-12-05 00:14:05.287275 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-12-05 00:14:05.390591 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-12-05 00:14:05.391923 | orchestrator | + source /opt/venv/bin/activate 2025-12-05 00:14:05.393121 | orchestrator | ++ deactivate nondestructive 2025-12-05 00:14:05.393220 | orchestrator | ++ '[' -n '' ']' 2025-12-05 00:14:05.393235 | orchestrator | ++ '[' -n '' ']' 2025-12-05 00:14:05.393245 | orchestrator | ++ hash -r 2025-12-05 00:14:05.393260 | orchestrator | ++ 
'[' -n '' ']' 2025-12-05 00:14:05.393270 | orchestrator | ++ unset VIRTUAL_ENV 2025-12-05 00:14:05.393280 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-12-05 00:14:05.393509 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-12-05 00:14:05.393530 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-12-05 00:14:05.393543 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-12-05 00:14:05.393553 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-12-05 00:14:05.393563 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-12-05 00:14:05.393574 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-12-05 00:14:05.393618 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-12-05 00:14:05.393630 | orchestrator | ++ export PATH 2025-12-05 00:14:05.393677 | orchestrator | ++ '[' -n '' ']' 2025-12-05 00:14:05.393726 | orchestrator | ++ '[' -z '' ']' 2025-12-05 00:14:05.393738 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-12-05 00:14:05.393777 | orchestrator | ++ PS1='(venv) ' 2025-12-05 00:14:05.393789 | orchestrator | ++ export PS1 2025-12-05 00:14:05.393799 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-12-05 00:14:05.393860 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-12-05 00:14:05.394534 | orchestrator | ++ hash -r 2025-12-05 00:14:05.394570 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-12-05 00:14:06.559261 | orchestrator | 2025-12-05 00:14:06.559386 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-12-05 00:14:06.559405 | orchestrator | 2025-12-05 00:14:06.559418 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-12-05 00:14:07.110971 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:07.111095 | orchestrator | 2025-12-05 00:14:07.111112 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-12-05 00:14:08.069704 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:08.069870 | orchestrator | 2025-12-05 00:14:08.069891 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-12-05 00:14:08.069905 | orchestrator | 2025-12-05 00:14:08.069917 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-12-05 00:14:10.418223 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:10.418351 | orchestrator | 2025-12-05 00:14:10.418369 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-12-05 00:14:10.473942 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:10.474134 | orchestrator | 2025-12-05 00:14:10.474166 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-12-05 00:14:10.921373 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:10.921492 | orchestrator | 2025-12-05 00:14:10.921510 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-12-05 00:14:10.963548 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:14:10.963644 | orchestrator | 2025-12-05 00:14:10.963659 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-12-05 00:14:11.309110 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:11.309218 | orchestrator | 2025-12-05 00:14:11.309234 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-12-05 00:14:11.365351 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:14:11.365448 | orchestrator | 2025-12-05 00:14:11.365460 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-12-05 00:14:11.678284 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:11.678413 | orchestrator | 2025-12-05 00:14:11.678446 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-12-05 00:14:11.808179 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:14:11.808271 | orchestrator | 2025-12-05 00:14:11.808281 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-12-05 00:14:11.808289 | orchestrator | 2025-12-05 00:14:11.808299 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-12-05 00:14:13.461615 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:13.461740 | orchestrator | 2025-12-05 00:14:13.461758 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-12-05 00:14:13.568311 | orchestrator | included: osism.services.traefik for testbed-manager 2025-12-05 00:14:13.568421 | orchestrator | 2025-12-05 00:14:13.568437 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-12-05 00:14:13.633802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-12-05 00:14:13.633956 | orchestrator | 2025-12-05 00:14:13.633979 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-12-05 00:14:14.694503 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-12-05 00:14:14.694614 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-12-05 00:14:14.694631 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-12-05 00:14:14.694644 | orchestrator | 2025-12-05 00:14:14.694657 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-12-05 00:14:16.455196 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-12-05 00:14:16.455305 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-12-05 00:14:16.455325 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-12-05 00:14:16.455339 | orchestrator | 2025-12-05 00:14:16.455352 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-12-05 00:14:17.108969 | orchestrator | changed: [testbed-manager] => (item=None) 2025-12-05 00:14:17.109088 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:17.109106 | orchestrator | 2025-12-05 00:14:17.109120 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-12-05 00:14:17.729413 | orchestrator | changed: [testbed-manager] => (item=None) 2025-12-05 00:14:17.729522 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:17.729539 | orchestrator | 2025-12-05 00:14:17.729551 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-12-05 00:14:17.779763 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:14:17.779889 | orchestrator | 2025-12-05 00:14:17.779905 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-12-05 00:14:18.120875 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:18.120999 | orchestrator | 2025-12-05 00:14:18.121027 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-12-05 00:14:18.192686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-12-05 00:14:18.192778 | orchestrator | 2025-12-05 00:14:18.192792 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-12-05 00:14:20.683124 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:20.683164 | orchestrator | 2025-12-05 00:14:20.683173 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-12-05 00:14:21.478120 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:21.478287 | orchestrator | 2025-12-05 00:14:21.478307 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-12-05 00:14:32.967974 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:32.968124 | orchestrator | 2025-12-05 00:14:32.968961 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-12-05 00:14:33.026153 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:14:33.026239 | orchestrator | 2025-12-05 00:14:33.026249 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-12-05 00:14:33.026255 | orchestrator | 2025-12-05 00:14:33.026261 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-12-05 00:14:34.693036 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:34.693161 | orchestrator | 2025-12-05 00:14:34.693211 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-12-05 00:14:34.790673 | orchestrator | included: osism.services.manager for testbed-manager 2025-12-05 00:14:34.790760 | orchestrator | 2025-12-05 00:14:34.790766 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-12-05 00:14:34.853573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-12-05 00:14:34.853657 | orchestrator | 2025-12-05 00:14:34.853664 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-12-05 00:14:37.369420 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:37.369542 | orchestrator | 2025-12-05 00:14:37.369561 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-12-05 00:14:37.421750 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:37.421876 | orchestrator | 2025-12-05 00:14:37.421894 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-12-05 00:14:37.547508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-12-05 00:14:37.547618 | orchestrator | 2025-12-05 00:14:37.547631 | 
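For orientation, the traefik service tasks above (create the external Docker network, copy docker-compose.yml into /opt/traefik, manage the service) correspond roughly to the shell steps below. /opt/traefik comes from the directories created earlier in this log; the network name and the exact way the role starts the stack are assumptions.

# Sketch only: bring up the traefik stack by hand.
docker network inspect traefik >/dev/null 2>&1 || docker network create traefik
docker compose --project-directory /opt/traefik up -d
docker compose --project-directory /opt/traefik ps   # same style of status check used for /opt/manager later in this log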
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-12-05 00:14:40.319532 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-12-05 00:14:40.319660 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-12-05 00:14:40.319676 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-12-05 00:14:40.319689 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-12-05 00:14:40.319701 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-12-05 00:14:40.319713 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-12-05 00:14:40.319725 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-12-05 00:14:40.319737 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-12-05 00:14:40.319750 | orchestrator | 2025-12-05 00:14:40.319762 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-12-05 00:14:40.932758 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:40.932909 | orchestrator | 2025-12-05 00:14:40.932929 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-12-05 00:14:41.546111 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:41.546202 | orchestrator | 2025-12-05 00:14:41.546210 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-12-05 00:14:41.618225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-12-05 00:14:41.618285 | orchestrator | 2025-12-05 00:14:41.618298 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-12-05 00:14:42.800009 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-12-05 00:14:42.801571 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-12-05 00:14:42.801598 | orchestrator | 2025-12-05 00:14:42.801611 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-12-05 00:14:43.422362 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:43.422510 | orchestrator | 2025-12-05 00:14:43.422527 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-12-05 00:14:43.467217 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:14:43.467332 | orchestrator | 2025-12-05 00:14:43.467347 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-12-05 00:14:43.533077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-12-05 00:14:43.533196 | orchestrator | 2025-12-05 00:14:43.533214 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-12-05 00:14:44.135685 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:44.135831 | orchestrator | 2025-12-05 00:14:44.135846 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-12-05 00:14:44.191416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-12-05 00:14:44.191575 | orchestrator | 2025-12-05 00:14:44.191592 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-12-05 00:14:45.541485 | orchestrator | changed: [testbed-manager] => (item=None) 2025-12-05 00:14:45.541640 | orchestrator | changed: [testbed-manager] => (item=None) 2025-12-05 00:14:45.541659 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:45.541673 | orchestrator | 2025-12-05 00:14:45.541686 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-12-05 00:14:46.180847 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:46.180964 | orchestrator | 2025-12-05 00:14:46.180982 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-12-05 00:14:46.242627 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:14:46.242749 | orchestrator | 2025-12-05 00:14:46.242766 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-12-05 00:14:46.321927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-12-05 00:14:46.322110 | orchestrator | 2025-12-05 00:14:46.322130 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-12-05 00:14:46.825427 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:46.825555 | orchestrator | 2025-12-05 00:14:46.825579 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-12-05 00:14:47.208202 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:47.208336 | orchestrator | 2025-12-05 00:14:47.208354 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-12-05 00:14:48.404416 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-12-05 00:14:48.404516 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-12-05 00:14:48.404526 | orchestrator | 2025-12-05 00:14:48.404534 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-12-05 00:14:49.021662 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:49.021828 | orchestrator | 2025-12-05 00:14:49.021849 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-12-05 00:14:49.415963 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:49.416087 | orchestrator | 2025-12-05 00:14:49.416103 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-12-05 00:14:49.777552 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:49.777683 | orchestrator | 2025-12-05 00:14:49.777704 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-12-05 00:14:49.821629 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:14:49.821734 | orchestrator | 2025-12-05 00:14:49.821746 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-12-05 00:14:49.884205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-12-05 00:14:49.884313 | orchestrator | 2025-12-05 00:14:49.884326 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-12-05 00:14:49.921064 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:49.921200 | 
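The two inotify tasks above raise kernel limits for the manager services. A shell sketch; the values are placeholders, since the numbers the role actually sets are not visible in this log.

# Sketch only: raise inotify limits (placeholder values, run as root).
sysctl -w fs.inotify.max_user_watches=1048576
sysctl -w fs.inotify.max_user_instances=1024
# To survive the reboot performed earlier, the role would also have to persist
# these keys, e.g. in a file under /etc/sysctl.d/ (assumption).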
orchestrator | 2025-12-05 00:14:49.921225 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-12-05 00:14:51.921081 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-12-05 00:14:51.921175 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-12-05 00:14:51.921185 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-12-05 00:14:51.921193 | orchestrator | 2025-12-05 00:14:51.921201 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-12-05 00:14:52.631457 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:52.631600 | orchestrator | 2025-12-05 00:14:52.631622 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-12-05 00:14:53.327670 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:53.327827 | orchestrator | 2025-12-05 00:14:53.327845 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-12-05 00:14:54.033294 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:54.033412 | orchestrator | 2025-12-05 00:14:54.033422 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-12-05 00:14:54.096071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-12-05 00:14:54.096184 | orchestrator | 2025-12-05 00:14:54.096201 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-12-05 00:14:54.132409 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:54.132533 | orchestrator | 2025-12-05 00:14:54.132555 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-12-05 00:14:54.786558 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-12-05 00:14:54.786706 | orchestrator | 2025-12-05 00:14:54.786722 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-12-05 00:14:54.854614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-12-05 00:14:54.854732 | orchestrator | 2025-12-05 00:14:54.854748 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-12-05 00:14:55.555261 | orchestrator | changed: [testbed-manager] 2025-12-05 00:14:55.555356 | orchestrator | 2025-12-05 00:14:55.555368 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-12-05 00:14:56.115564 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:56.115686 | orchestrator | 2025-12-05 00:14:56.115703 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-12-05 00:14:56.160744 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:14:56.160865 | orchestrator | 2025-12-05 00:14:56.160877 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-12-05 00:14:56.205223 | orchestrator | ok: [testbed-manager] 2025-12-05 00:14:56.205328 | orchestrator | 2025-12-05 00:14:56.205338 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-12-05 00:14:56.992536 | orchestrator | changed: [testbed-manager] 2025-12-05 
00:14:56.992661 | orchestrator | 2025-12-05 00:14:56.992679 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-12-05 00:16:11.041859 | orchestrator | ok: [testbed-manager] 2025-12-05 00:16:11.041982 | orchestrator | 2025-12-05 00:16:11.041999 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-12-05 00:16:12.074625 | orchestrator | ok: [testbed-manager] 2025-12-05 00:16:12.074775 | orchestrator | 2025-12-05 00:16:12.074792 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-12-05 00:16:12.130546 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:16:12.130651 | orchestrator | 2025-12-05 00:16:12.130669 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-12-05 00:16:14.515198 | orchestrator | changed: [testbed-manager] 2025-12-05 00:16:14.515321 | orchestrator | 2025-12-05 00:16:14.515339 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-12-05 00:16:14.598511 | orchestrator | ok: [testbed-manager] 2025-12-05 00:16:14.598658 | orchestrator | 2025-12-05 00:16:14.598687 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-12-05 00:16:14.598752 | orchestrator | 2025-12-05 00:16:14.598768 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-12-05 00:16:14.659610 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:16:14.659808 | orchestrator | 2025-12-05 00:16:14.659835 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-12-05 00:17:14.713426 | orchestrator | Pausing for 60 seconds 2025-12-05 00:17:14.713551 | orchestrator | changed: [testbed-manager] 2025-12-05 00:17:14.713563 | orchestrator | 2025-12-05 00:17:14.713574 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-12-05 00:17:18.005641 | orchestrator | changed: [testbed-manager] 2025-12-05 00:17:18.005814 | orchestrator | 2025-12-05 00:17:18.005834 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-12-05 00:18:20.010305 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-12-05 00:18:20.010470 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-12-05 00:18:20.010488 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2025-12-05 00:18:20.010530 | orchestrator | changed: [testbed-manager] 2025-12-05 00:18:20.010545 | orchestrator | 2025-12-05 00:18:20.010558 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-12-05 00:18:21.656143 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "cmd": "INTERACTIVE=false osism complete > /etc/bash_completion.d/osism", "delta": "0:00:01.246071", "end": "2025-12-05 00:18:21.582029", "failed_when_result": true, "msg": "non-zero return code", "rc": 1, "start": "2025-12-05 00:18:20.335958", "stderr": "WARNING: OSISM CLI should not be used as root user\n\nTraceback (most recent call last):\n File \"/usr/local/bin/osism\", line 4, in \n from osism.main import main\n File \"/usr/local/lib/python3.13/site-packages/osism/main.py\", line 5, in \n from cliff.app import App\n File \"/usr/local/lib/python3.13/site-packages/cliff/app.py\", line 25, in \n from . import complete\n File \"/usr/local/lib/python3.13/site-packages/cliff/complete.py\", line 190, in \n class CompleteCommand(_command.Command):\n ...<61 lines>...\n return 0\n File \"/usr/local/lib/python3.13/site-packages/cliff/complete.py\", line 194, in CompleteCommand\n _formatters: stevedore.ExtensionManager[CompleteShellBase]\n ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^\nTypeError: type 'ExtensionManager' is not subscriptable", "stderr_lines": ["WARNING: OSISM CLI should not be used as root user", "", "Traceback (most recent call last):", " File \"/usr/local/bin/osism\", line 4, in ", " from osism.main import main", " File \"/usr/local/lib/python3.13/site-packages/osism/main.py\", line 5, in ", " from cliff.app import App", " File \"/usr/local/lib/python3.13/site-packages/cliff/app.py\", line 25, in ", " from . import complete", " File \"/usr/local/lib/python3.13/site-packages/cliff/complete.py\", line 190, in ", " class CompleteCommand(_command.Command):", " ...<61 lines>...", " return 0", " File \"/usr/local/lib/python3.13/site-packages/cliff/complete.py\", line 194, in CompleteCommand", " _formatters: stevedore.ExtensionManager[CompleteShellBase]", " ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^", "TypeError: type 'ExtensionManager' is not subscriptable"], "stdout": "", "stdout_lines": []} 2025-12-05 00:18:21.656261 | orchestrator | ...ignoring 2025-12-05 00:18:21.656277 | orchestrator | 2025-12-05 00:18:21.656290 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-12-05 00:18:21.741722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-12-05 00:18:21.741830 | orchestrator | 2025-12-05 00:18:21.741845 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-12-05 00:18:21.741857 | orchestrator | 2025-12-05 00:18:21.741868 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-12-05 00:18:21.792925 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:18:21.793032 | orchestrator | 2025-12-05 00:18:21.793046 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2025-12-05 00:18:21.864038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2025-12-05 00:18:21.864147 | orchestrator | 2025-12-05 00:18:21.864162 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2025-12-05 00:18:22.646565 | orchestrator | changed: [testbed-manager] 2025-12-05 00:18:22.646735 | orchestrator | 2025-12-05 00:18:22.646760 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2025-12-05 00:18:25.843145 | 
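The one failure in this run, the ignored 'Copy osismclient bash completion script' task above, is not a deployment problem but an import-time crash in the osism CLI: cliff/complete.py line 194 annotates a class attribute with stevedore.ExtensionManager[CompleteShellBase], class-body annotations are evaluated when the module is imported, and subscripting a class that does not implement __class_getitem__ raises exactly the TypeError shown. A minimal reproduction with a stand-in class instead of the real stevedore (the version mix inside the osism image is an assumption; only the traceback itself is taken from this log):

# Reproduce "type 'X' is not subscriptable" from a class-body annotation.
python3 - <<'EOF'
class ExtensionManager:                      # stand-in for an ExtensionManager without __class_getitem__
    pass

try:
    class CompleteCommand:
        _formatters: ExtensionManager[str]   # evaluated at class-creation time
except TypeError as exc:
    print(exc)                               # -> type 'ExtensionManager' is not subscriptable
EOF

A stevedore release whose ExtensionManager supports subscription, or deferring annotation evaluation with 'from __future__ import annotations' in cliff, would avoid the crash; which of the two the images eventually pick up is outside this log.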
orchestrator | ok: [testbed-manager] 2025-12-05 00:18:25.843260 | orchestrator | 2025-12-05 00:18:25.843274 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2025-12-05 00:18:25.904442 | orchestrator | ok: [testbed-manager] => { 2025-12-05 00:18:25.904502 | orchestrator | "version_check_result.stdout_lines": [ 2025-12-05 00:18:25.904537 | orchestrator | "=== OSISM Container Version Check ===", 2025-12-05 00:18:25.904545 | orchestrator | "Checking running containers against expected versions...", 2025-12-05 00:18:25.904554 | orchestrator | "", 2025-12-05 00:18:25.904563 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2025-12-05 00:18:25.904570 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2025-12-05 00:18:25.904578 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.904585 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2025-12-05 00:18:25.904593 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.904600 | orchestrator | "", 2025-12-05 00:18:25.904608 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2025-12-05 00:18:25.904823 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2025-12-05 00:18:25.904970 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.904988 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2025-12-05 00:18:25.905001 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905013 | orchestrator | "", 2025-12-05 00:18:25.905024 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2025-12-05 00:18:25.905036 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2025-12-05 00:18:25.905047 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905058 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2025-12-05 00:18:25.905069 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905080 | orchestrator | "", 2025-12-05 00:18:25.905091 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2025-12-05 00:18:25.905104 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2025-12-05 00:18:25.905115 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905127 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2025-12-05 00:18:25.905137 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905148 | orchestrator | "", 2025-12-05 00:18:25.905187 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2025-12-05 00:18:25.905199 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2025-12-05 00:18:25.905210 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905220 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2025-12-05 00:18:25.905231 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905242 | orchestrator | "", 2025-12-05 00:18:25.905252 | orchestrator | "Checking service: osismclient (OSISM Client)", 2025-12-05 00:18:25.905263 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905275 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905285 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905296 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905307 | orchestrator | "", 2025-12-05 00:18:25.905318 | 
orchestrator | "Checking service: ara-server (ARA Server)", 2025-12-05 00:18:25.905329 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2025-12-05 00:18:25.905340 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905350 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2025-12-05 00:18:25.905361 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905372 | orchestrator | "", 2025-12-05 00:18:25.905383 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2025-12-05 00:18:25.905394 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-12-05 00:18:25.905404 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905415 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-12-05 00:18:25.905426 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905436 | orchestrator | "", 2025-12-05 00:18:25.905447 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2025-12-05 00:18:25.905462 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2025-12-05 00:18:25.905473 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905510 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2025-12-05 00:18:25.905521 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905532 | orchestrator | "", 2025-12-05 00:18:25.905543 | orchestrator | "Checking service: redis (Redis Cache)", 2025-12-05 00:18:25.905555 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-12-05 00:18:25.905565 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905576 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-12-05 00:18:25.905586 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905597 | orchestrator | "", 2025-12-05 00:18:25.905608 | orchestrator | "Checking service: api (OSISM API Service)", 2025-12-05 00:18:25.905619 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905661 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905673 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905684 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905694 | orchestrator | "", 2025-12-05 00:18:25.905705 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2025-12-05 00:18:25.905716 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905727 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905738 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905748 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905759 | orchestrator | "", 2025-12-05 00:18:25.905770 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2025-12-05 00:18:25.905781 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905804 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905815 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905826 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905837 | orchestrator | "", 2025-12-05 00:18:25.905848 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2025-12-05 00:18:25.905858 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905869 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905880 
| orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905891 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.905901 | orchestrator | "", 2025-12-05 00:18:25.905913 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2025-12-05 00:18:25.905959 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905970 | orchestrator | " Enabled: true", 2025-12-05 00:18:25.905981 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-12-05 00:18:25.905992 | orchestrator | " Status: ✅ MATCH", 2025-12-05 00:18:25.906003 | orchestrator | "", 2025-12-05 00:18:25.906134 | orchestrator | "=== Summary ===", 2025-12-05 00:18:25.906163 | orchestrator | "Errors (version mismatches): 0", 2025-12-05 00:18:25.906180 | orchestrator | "Warnings (expected containers not running): 0", 2025-12-05 00:18:25.906192 | orchestrator | "", 2025-12-05 00:18:25.906203 | orchestrator | "✅ All running containers match expected versions!" 2025-12-05 00:18:25.906214 | orchestrator | ] 2025-12-05 00:18:25.906225 | orchestrator | } 2025-12-05 00:18:25.906236 | orchestrator | 2025-12-05 00:18:25.906247 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-12-05 00:18:25.964752 | orchestrator | skipping: [testbed-manager] 2025-12-05 00:18:25.964853 | orchestrator | 2025-12-05 00:18:25.964867 | orchestrator | PLAY RECAP ********************************************************************* 2025-12-05 00:18:25.964880 | orchestrator | testbed-manager : ok=70 changed=36 unreachable=0 failed=0 skipped=13 rescued=0 ignored=1 2025-12-05 00:18:25.964892 | orchestrator | 2025-12-05 00:18:26.064108 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-12-05 00:18:26.064211 | orchestrator | + deactivate 2025-12-05 00:18:26.064225 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-12-05 00:18:26.064239 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-12-05 00:18:26.064281 | orchestrator | + export PATH 2025-12-05 00:18:26.064293 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-12-05 00:18:26.064305 | orchestrator | + '[' -n '' ']' 2025-12-05 00:18:26.064316 | orchestrator | + hash -r 2025-12-05 00:18:26.064327 | orchestrator | + '[' -n '' ']' 2025-12-05 00:18:26.064338 | orchestrator | + unset VIRTUAL_ENV 2025-12-05 00:18:26.064349 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-12-05 00:18:26.064361 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-12-05 00:18:26.064371 | orchestrator | + unset -f deactivate 2025-12-05 00:18:26.064383 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-12-05 00:18:26.072245 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-12-05 00:18:26.072293 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-12-05 00:18:26.072309 | orchestrator | + local max_attempts=60 2025-12-05 00:18:26.072324 | orchestrator | + local name=ceph-ansible 2025-12-05 00:18:26.072336 | orchestrator | + local attempt_num=1 2025-12-05 00:18:26.072906 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-12-05 00:18:26.100722 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-12-05 00:18:26.100785 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-12-05 00:18:26.100798 | orchestrator | + local max_attempts=60 2025-12-05 00:18:26.100810 | orchestrator | + local name=kolla-ansible 2025-12-05 00:18:26.100822 | orchestrator | + local attempt_num=1 2025-12-05 00:18:26.101753 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-12-05 00:18:26.137822 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-12-05 00:18:26.137886 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-12-05 00:18:26.137901 | orchestrator | + local max_attempts=60 2025-12-05 00:18:26.137914 | orchestrator | + local name=osism-ansible 2025-12-05 00:18:26.137925 | orchestrator | + local attempt_num=1 2025-12-05 00:18:26.138809 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-12-05 00:18:26.184784 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-12-05 00:18:26.184884 | orchestrator | + [[ true == \t\r\u\e ]] 2025-12-05 00:18:26.184899 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-12-05 00:18:26.889253 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-12-05 00:18:27.055400 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-12-05 00:18:27.055524 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-12-05 00:18:27.055540 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-12-05 00:18:27.055553 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Restarting (1) 17 seconds ago 2025-12-05 00:18:27.055587 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-12-05 00:18:27.055600 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Restarting (1) 17 seconds ago 2025-12-05 00:18:27.055611 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Restarting (1) 16 seconds ago 2025-12-05 00:18:27.055622 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2025-12-05 00:18:27.055685 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Restarting 
(1) 17 seconds ago
2025-12-05 00:18:27.055699 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2025-12-05 00:18:27.055734 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Restarting (1) 16 seconds ago
2025-12-05 00:18:27.055746 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2025-12-05 00:18:27.055757 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2025-12-05 00:18:27.055768 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2025-12-05 00:18:27.055781 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2025-12-05 00:18:27.055792 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2025-12-05 00:18:27.060825 | orchestrator | ++ semver latest 7.0.0
2025-12-05 00:18:27.115513 | orchestrator | + [[ -1 -ge 0 ]]
2025-12-05 00:18:27.115607 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-12-05 00:18:27.115624 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-12-05 00:18:27.121806 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-12-05 00:18:27.397978 | orchestrator | Traceback (most recent call last):
2025-12-05 00:18:27.399598 | orchestrator | File "/usr/local/bin/osism", line 4, in <module>
2025-12-05 00:18:27.399686 | orchestrator | from osism.main import main
2025-12-05 00:18:27.399701 | orchestrator | File "/usr/local/lib/python3.13/site-packages/osism/main.py", line 5, in <module>
2025-12-05 00:18:27.399715 | orchestrator | from cliff.app import App
2025-12-05 00:18:27.399727 | orchestrator | File "/usr/local/lib/python3.13/site-packages/cliff/app.py", line 25, in <module>
2025-12-05 00:18:27.399739 | orchestrator | from . import complete
2025-12-05 00:18:27.399750 | orchestrator | File "/usr/local/lib/python3.13/site-packages/cliff/complete.py", line 190, in <module>
2025-12-05 00:18:27.399762 | orchestrator | class CompleteCommand(_command.Command):
2025-12-05 00:18:27.399774 | orchestrator | ...<61 lines>...
2025-12-05 00:18:27.399785 | orchestrator | return 0
2025-12-05 00:18:27.399796 | orchestrator | File "/usr/local/lib/python3.13/site-packages/cliff/complete.py", line 194, in CompleteCommand
2025-12-05 00:18:27.399807 | orchestrator | _formatters: stevedore.ExtensionManager[CompleteShellBase]
2025-12-05 00:18:27.399818 | orchestrator | ~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
2025-12-05 00:18:27.399829 | orchestrator | TypeError: type 'ExtensionManager' is not subscriptable
2025-12-05 00:18:27.517100 | orchestrator | ERROR
2025-12-05 00:18:27.517569 | orchestrator | {
2025-12-05 00:18:27.517683 | orchestrator | "delta": "0:04:25.476719",
2025-12-05 00:18:27.517757 | orchestrator | "end": "2025-12-05 00:18:27.440185",
2025-12-05 00:18:27.517818 | orchestrator | "msg": "non-zero return code",
2025-12-05 00:18:27.517872 | orchestrator | "rc": 1,
2025-12-05 00:18:27.517926 | orchestrator | "start": "2025-12-05 00:14:01.963466"
2025-12-05 00:18:27.517977 | orchestrator | } failure
2025-12-05 00:18:27.531984 |
2025-12-05 00:18:27.532110 | PLAY RECAP
2025-12-05 00:18:27.532197 | orchestrator | ok: 20 changed: 7 unreachable: 0 failed: 1 skipped: 2 rescued: 0 ignored: 0
2025-12-05 00:18:27.532238 |
2025-12-05 00:18:27.695505 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-12-05 00:18:27.697213 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-12-05 00:18:28.528165 |
2025-12-05 00:18:28.528425 | PLAY [Post output play]
2025-12-05 00:18:28.545324 |
2025-12-05 00:18:28.545520 | LOOP [stage-output : Register sources]
2025-12-05 00:18:28.601575 |
2025-12-05 00:18:28.601823 | TASK [stage-output : Check sudo]
2025-12-05 00:18:29.461695 | orchestrator | sudo: a password is required
2025-12-05 00:18:29.638081 | orchestrator | ok: Runtime: 0:00:00.019470
2025-12-05 00:18:29.646431 |
2025-12-05 00:18:29.646606 | LOOP [stage-output : Set source and destination for files and folders]
2025-12-05 00:18:29.682184 |
2025-12-05 00:18:29.682531 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-12-05 00:18:29.759695 | orchestrator | ok
2025-12-05 00:18:29.768830 |
2025-12-05 00:18:29.768998 | LOOP [stage-output : Ensure target folders exist]
2025-12-05 00:18:30.259583 | orchestrator | ok: "docs"
2025-12-05 00:18:30.259932 |
2025-12-05 00:18:30.523663 | orchestrator | ok: "artifacts"
2025-12-05 00:18:30.805017 | orchestrator | ok: "logs"
2025-12-05 00:18:30.829061 |
2025-12-05 00:18:30.829256 | LOOP [stage-output : Copy files and folders to staging folder]
2025-12-05 00:18:30.881816 |
2025-12-05 00:18:30.882201 | TASK [stage-output : Make all log files readable]
2025-12-05 00:18:31.164512 | orchestrator | ok
2025-12-05 00:18:31.173883 |
2025-12-05 00:18:31.174038 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-12-05 00:18:31.208920 | orchestrator | skipping: Conditional result was False
2025-12-05 00:18:31.226445 |
2025-12-05 00:18:31.226662 | TASK [stage-output : Discover log files for compression]
2025-12-05 00:18:31.251997 | orchestrator | skipping: Conditional result was False
2025-12-05 00:18:31.267244 |
2025-12-05 00:18:31.267415 | LOOP [stage-output : Archive everything from logs]
2025-12-05 00:18:31.312629 |
2025-12-05 00:18:31.312835 | PLAY [Post cleanup play]
2025-12-05 00:18:31.321828 |
2025-12-05 00:18:31.321964 | TASK [Set cloud fact (Zuul deployment)]
2025-12-05 00:18:31.382366 | orchestrator | ok
2025-12-05 00:18:31.395690 |
2025-12-05 00:18:31.395860 | TASK [Set cloud
fact (local deployment)] 2025-12-05 00:18:31.432161 | orchestrator | skipping: Conditional result was False 2025-12-05 00:18:31.450573 | 2025-12-05 00:18:31.450752 | TASK [Clean the cloud environment] 2025-12-05 00:18:32.068728 | orchestrator | 2025-12-05 00:18:32 - clean up servers 2025-12-05 00:18:33.287481 | orchestrator | 2025-12-05 00:18:33 - testbed-manager 2025-12-05 00:18:33.368970 | orchestrator | 2025-12-05 00:18:33 - testbed-node-4 2025-12-05 00:18:33.455094 | orchestrator | 2025-12-05 00:18:33 - testbed-node-3 2025-12-05 00:18:33.541018 | orchestrator | 2025-12-05 00:18:33 - testbed-node-1 2025-12-05 00:18:33.639477 | orchestrator | 2025-12-05 00:18:33 - testbed-node-5 2025-12-05 00:18:33.732559 | orchestrator | 2025-12-05 00:18:33 - testbed-node-2 2025-12-05 00:18:33.829571 | orchestrator | 2025-12-05 00:18:33 - testbed-node-0 2025-12-05 00:18:33.919303 | orchestrator | 2025-12-05 00:18:33 - clean up keypairs 2025-12-05 00:18:33.941196 | orchestrator | 2025-12-05 00:18:33 - testbed 2025-12-05 00:18:33.966833 | orchestrator | 2025-12-05 00:18:33 - wait for servers to be gone 2025-12-05 00:18:44.775462 | orchestrator | 2025-12-05 00:18:44 - clean up ports 2025-12-05 00:18:44.972225 | orchestrator | 2025-12-05 00:18:44 - 0efd979f-04be-4783-975a-7064bd4fb7bd 2025-12-05 00:18:45.226739 | orchestrator | 2025-12-05 00:18:45 - 306d41de-1783-4f9a-9889-01fad5de5cfd 2025-12-05 00:18:45.483652 | orchestrator | 2025-12-05 00:18:45 - 3bc557c7-16c2-487e-83a5-3fff52bb4b02 2025-12-05 00:18:45.730771 | orchestrator | 2025-12-05 00:18:45 - 52966465-b54e-49a6-a79f-a67b458fc457 2025-12-05 00:18:45.942814 | orchestrator | 2025-12-05 00:18:45 - 6784a5d3-ac73-4fdf-9047-67b1ec649e6c 2025-12-05 00:18:46.578957 | orchestrator | 2025-12-05 00:18:46 - af0cf929-0a4f-4023-9e9b-33d556cf4a15 2025-12-05 00:18:47.454865 | orchestrator | 2025-12-05 00:18:47 - f53dc70b-5531-435b-b4fe-d14aae5636cd 2025-12-05 00:18:47.682411 | orchestrator | 2025-12-05 00:18:47 - clean up volumes 2025-12-05 00:18:47.813065 | orchestrator | 2025-12-05 00:18:47 - testbed-volume-0-node-base 2025-12-05 00:18:47.857546 | orchestrator | 2025-12-05 00:18:47 - testbed-volume-4-node-base 2025-12-05 00:18:47.901562 | orchestrator | 2025-12-05 00:18:47 - testbed-volume-2-node-base 2025-12-05 00:18:47.945349 | orchestrator | 2025-12-05 00:18:47 - testbed-volume-5-node-base 2025-12-05 00:18:47.985217 | orchestrator | 2025-12-05 00:18:47 - testbed-volume-1-node-base 2025-12-05 00:18:48.031608 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-manager-base 2025-12-05 00:18:48.072294 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-3-node-base 2025-12-05 00:18:48.131977 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-8-node-5 2025-12-05 00:18:48.181570 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-2-node-5 2025-12-05 00:18:48.222762 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-1-node-4 2025-12-05 00:18:48.267669 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-3-node-3 2025-12-05 00:18:48.311996 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-7-node-4 2025-12-05 00:18:48.355385 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-5-node-5 2025-12-05 00:18:48.399340 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-0-node-3 2025-12-05 00:18:48.443168 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-6-node-3 2025-12-05 00:18:48.481154 | orchestrator | 2025-12-05 00:18:48 - testbed-volume-4-node-4 2025-12-05 00:18:48.527960 | orchestrator | 2025-12-05 00:18:48 - disconnect routers 2025-12-05 
00:18:48.685339 | orchestrator | 2025-12-05 00:18:48 - testbed 2025-12-05 00:18:49.693139 | orchestrator | 2025-12-05 00:18:49 - clean up subnets 2025-12-05 00:18:49.736105 | orchestrator | 2025-12-05 00:18:49 - subnet-testbed-management 2025-12-05 00:18:49.951304 | orchestrator | 2025-12-05 00:18:49 - clean up networks 2025-12-05 00:18:50.126977 | orchestrator | 2025-12-05 00:18:50 - net-testbed-management 2025-12-05 00:18:50.447647 | orchestrator | 2025-12-05 00:18:50 - clean up security groups 2025-12-05 00:18:50.493732 | orchestrator | 2025-12-05 00:18:50 - testbed-node 2025-12-05 00:18:50.604554 | orchestrator | 2025-12-05 00:18:50 - testbed-management 2025-12-05 00:18:50.756064 | orchestrator | 2025-12-05 00:18:50 - clean up floating ips 2025-12-05 00:18:50.786656 | orchestrator | 2025-12-05 00:18:50 - 81.163.193.182 2025-12-05 00:18:51.141590 | orchestrator | 2025-12-05 00:18:51 - clean up routers 2025-12-05 00:18:51.251911 | orchestrator | 2025-12-05 00:18:51 - testbed 2025-12-05 00:18:52.512862 | orchestrator | ok: Runtime: 0:00:20.331218 2025-12-05 00:18:52.518099 | 2025-12-05 00:18:52.518284 | PLAY RECAP 2025-12-05 00:18:52.518423 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-12-05 00:18:52.518679 | 2025-12-05 00:18:52.672914 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-12-05 00:18:52.674033 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-12-05 00:18:53.456154 | 2025-12-05 00:18:53.456334 | PLAY [Cleanup play] 2025-12-05 00:18:53.473695 | 2025-12-05 00:18:53.473879 | TASK [Set cloud fact (Zuul deployment)] 2025-12-05 00:18:53.532894 | orchestrator | ok 2025-12-05 00:18:53.541897 | 2025-12-05 00:18:53.542058 | TASK [Set cloud fact (local deployment)] 2025-12-05 00:18:53.588555 | orchestrator | skipping: Conditional result was False 2025-12-05 00:18:53.606816 | 2025-12-05 00:18:53.607027 | TASK [Clean the cloud environment] 2025-12-05 00:18:54.853203 | orchestrator | 2025-12-05 00:18:54 - clean up servers 2025-12-05 00:18:55.329585 | orchestrator | 2025-12-05 00:18:55 - clean up keypairs 2025-12-05 00:18:55.344800 | orchestrator | 2025-12-05 00:18:55 - wait for servers to be gone 2025-12-05 00:18:55.387299 | orchestrator | 2025-12-05 00:18:55 - clean up ports 2025-12-05 00:18:55.472690 | orchestrator | 2025-12-05 00:18:55 - clean up volumes 2025-12-05 00:18:55.547687 | orchestrator | 2025-12-05 00:18:55 - disconnect routers 2025-12-05 00:18:55.581073 | orchestrator | 2025-12-05 00:18:55 - clean up subnets 2025-12-05 00:18:55.599330 | orchestrator | 2025-12-05 00:18:55 - clean up networks 2025-12-05 00:18:55.761559 | orchestrator | 2025-12-05 00:18:55 - clean up security groups 2025-12-05 00:18:55.797192 | orchestrator | 2025-12-05 00:18:55 - clean up floating ips 2025-12-05 00:18:55.821960 | orchestrator | 2025-12-05 00:18:55 - clean up routers 2025-12-05 00:18:56.146061 | orchestrator | ok: Runtime: 0:00:01.381930 2025-12-05 00:18:56.151303 | 2025-12-05 00:18:56.151548 | PLAY RECAP 2025-12-05 00:18:56.151690 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-12-05 00:18:56.151763 | 2025-12-05 00:18:56.291152 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-12-05 00:18:56.292260 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-12-05 00:18:57.079113 | 2025-12-05 00:18:57.079303 | PLAY [Base 
post-fetch] 2025-12-05 00:18:57.096609 | 2025-12-05 00:18:57.096828 | TASK [fetch-output : Set log path for multiple nodes] 2025-12-05 00:18:57.162917 | orchestrator | skipping: Conditional result was False 2025-12-05 00:18:57.177037 | 2025-12-05 00:18:57.177289 | TASK [fetch-output : Set log path for single node] 2025-12-05 00:18:57.238678 | orchestrator | ok 2025-12-05 00:18:57.250709 | 2025-12-05 00:18:57.250913 | LOOP [fetch-output : Ensure local output dirs] 2025-12-05 00:18:57.782772 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/0f87daf7bba64c959b03fea26b5993d0/work/logs" 2025-12-05 00:18:58.088515 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0f87daf7bba64c959b03fea26b5993d0/work/artifacts" 2025-12-05 00:18:58.388252 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/0f87daf7bba64c959b03fea26b5993d0/work/docs" 2025-12-05 00:18:58.411530 | 2025-12-05 00:18:58.411697 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-12-05 00:18:59.397936 | orchestrator | changed: .d..t...... ./ 2025-12-05 00:18:59.398363 | orchestrator | changed: All items complete 2025-12-05 00:18:59.398438 | 2025-12-05 00:19:00.142547 | orchestrator | changed: .d..t...... ./ 2025-12-05 00:19:00.900957 | orchestrator | changed: .d..t...... ./ 2025-12-05 00:19:00.933336 | 2025-12-05 00:19:00.933565 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-12-05 00:19:00.971349 | orchestrator | skipping: Conditional result was False 2025-12-05 00:19:00.974948 | orchestrator | skipping: Conditional result was False 2025-12-05 00:19:00.997302 | 2025-12-05 00:19:00.997432 | PLAY RECAP 2025-12-05 00:19:00.997528 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-12-05 00:19:00.997569 | 2025-12-05 00:19:01.142277 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-12-05 00:19:01.143370 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-12-05 00:19:01.934596 | 2025-12-05 00:19:01.934762 | PLAY [Base post] 2025-12-05 00:19:01.949432 | 2025-12-05 00:19:01.949625 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-12-05 00:19:03.380853 | orchestrator | changed 2025-12-05 00:19:03.393392 | 2025-12-05 00:19:03.393565 | PLAY RECAP 2025-12-05 00:19:03.393650 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-12-05 00:19:03.393729 | 2025-12-05 00:19:03.518711 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-12-05 00:19:03.522017 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-12-05 00:19:04.345097 | 2025-12-05 00:19:04.345280 | PLAY [Base post-logs] 2025-12-05 00:19:04.356698 | 2025-12-05 00:19:04.356860 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-12-05 00:19:04.835510 | localhost | changed 2025-12-05 00:19:04.854175 | 2025-12-05 00:19:04.854387 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-12-05 00:19:04.895312 | localhost | ok 2025-12-05 00:19:04.903729 | 2025-12-05 00:19:04.903967 | TASK [Set zuul-log-path fact] 2025-12-05 00:19:04.934958 | localhost | ok 2025-12-05 00:19:04.950930 | 2025-12-05 00:19:04.951205 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-12-05 00:19:04.991534 | localhost | ok 2025-12-05 00:19:04.998779 | 2025-12-05 00:19:04.999057 | TASK [upload-logs : Create log 
directories] 2025-12-05 00:19:05.543792 | localhost | changed 2025-12-05 00:19:05.549143 | 2025-12-05 00:19:05.549309 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-12-05 00:19:06.085529 | localhost -> localhost | ok: Runtime: 0:00:00.007028 2025-12-05 00:19:06.094445 | 2025-12-05 00:19:06.094663 | TASK [upload-logs : Upload logs to log server] 2025-12-05 00:19:06.670783 | localhost | Output suppressed because no_log was given 2025-12-05 00:19:06.675192 | 2025-12-05 00:19:06.675380 | LOOP [upload-logs : Compress console log and json output] 2025-12-05 00:19:06.736674 | localhost | skipping: Conditional result was False 2025-12-05 00:19:06.742643 | localhost | skipping: Conditional result was False 2025-12-05 00:19:06.759748 | 2025-12-05 00:19:06.760071 | LOOP [upload-logs : Upload compressed console log and json output] 2025-12-05 00:19:06.816975 | localhost | skipping: Conditional result was False 2025-12-05 00:19:06.817831 | 2025-12-05 00:19:06.821482 | localhost | skipping: Conditional result was False 2025-12-05 00:19:06.835391 | 2025-12-05 00:19:06.835701 | LOOP [upload-logs : Upload console log and json output]
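The version check logged at 00:18:25 above compares, for each enabled manager service, the image its container is expected to run against the image it is actually running, counting mismatches as errors and missing containers as warnings. A minimal sketch of that comparison, assuming a hand-written mapping of container names to expected image references (the entries below are copied from the output above; the real osism.services.manager role presumably builds this list from its service configuration):

import subprocess

# Illustrative subset of the expected images reported by the check above.
EXPECTED = {
    "manager-ara-server-1": "registry.osism.tech/osism/ara-server:1.7.3",
    "manager-mariadb-1": "registry.osism.tech/dockerhub/library/mariadb:11.8.3",
    "manager-redis-1": "registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
    "osism-frontend": "registry.osism.tech/osism/osism-frontend:latest",
}

def running_image(container):
    """Return the image reference the container was started from, or None if it is not running."""
    result = subprocess.run(
        ["docker", "inspect", "-f", "{{.Config.Image}}", container],
        capture_output=True, text=True,
    )
    return result.stdout.strip() if result.returncode == 0 else None

errors = warnings = 0
for container, expected in EXPECTED.items():
    running = running_image(container)
    if running is None:
        print(f"{container}: expected container not running (warning)")
        warnings += 1
    elif running != expected:
        print(f"{container}: expected {expected}, running {running} (error)")
        errors += 1
    else:
        print(f"{container}: match ({running})")

print(f"Errors (version mismatches): {errors}")
print(f"Warnings (expected containers not running): {warnings}")

Because the comparison works on image references, a mutable tag such as :latest reports a match whenever the tag names agree, which would explain why the osism:latest services pass the check at 00:18:25 even though their containers show up as restarting two seconds later.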
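At 00:18:26 the deploy script waits for the ceph-ansible, kolla-ansible and osism-ansible containers to report a healthy state before continuing, polling docker inspect -f '{{.State.Health.Status}}' in a wait_for_container_healthy helper with a budget of 60 attempts. A Python equivalent of that loop; the five-second pause between attempts is an assumption, since the trace only shows first attempts that already succeed:

import subprocess
import time

def wait_for_container_healthy(max_attempts, name, pause=5.0):
    """Poll Docker's health status until the named container reports healthy."""
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(
            ["docker", "inspect", "-f", "{{.State.Health.Status}}", name],
            capture_output=True, text=True,
        )
        status = result.stdout.strip()
        if status == "healthy":
            return True
        print(f"attempt {attempt}/{max_attempts}: {name} is {status or 'not found'}")
        time.sleep(pause)  # interval is assumed; the script in the trace does not reveal it
    return False

for container in ("ceph-ansible", "kolla-ansible", "osism-ansible"):
    if not wait_for_container_healthy(60, container):
        raise SystemExit(f"{container} did not become healthy in time")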
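Right before the failing command, the script checks whether the manager version is at least 7.0.0: semver latest 7.0.0 returns -1 because the literal tag latest is not a semantic version, so a second test ([[ latest == latest ]]) accepts the tag explicitly. A compact sketch of that guard; only the 7.0.0 threshold and the special case for latest are taken from the trace, the rest is illustrative:

def _parts(version):
    return tuple(int(part) for part in version.split("."))

def is_at_least(version, minimum="7.0.0"):
    """Treat the mutable tag 'latest' as new enough; otherwise compare dotted
    numeric versions field by field (the deploy script shells out to a semver
    helper instead of parsing the string itself)."""
    if version == "latest":
        return True
    return _parts(version) >= _parts(minimum)

assert is_at_least("latest")
assert is_at_least("7.1.0")
assert not is_at_least("6.9.9")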
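The step that actually fails the run is osism apply resolvconf -l testbed-manager at 00:18:27: importing the osism CLI pulls in cliff's complete module, whose class body annotates an attribute as stevedore.ExtensionManager[CompleteShellBase], and the stevedore release installed in the image ships an ExtensionManager that cannot be subscripted, so evaluating the annotation raises TypeError: type 'ExtensionManager' is not subscriptable. That points at an incompatible cliff/stevedore pairing inside the osism:latest image rather than at the testbed configuration, and the same import error is plausibly what keeps the api, beat, flower, listener and openstack containers in the Restarting state shown by docker compose ps. A minimal reproduction of the failure mode, independent of both libraries:

# Stand-ins only; neither class comes from stevedore or cliff.
class ExtensionManager:            # no __class_getitem__, no typing.Generic base
    pass

class CompleteShellBase:
    pass

try:
    class CompleteCommand:
        # Class-level annotations are evaluated while the class body runs,
        # so the subscription below executes immediately.
        _formatters: ExtensionManager[CompleteShellBase]
except TypeError as exc:
    print(exc)                     # type 'ExtensionManager' is not subscriptable

The class body stops raising once annotation evaluation is deferred (from __future__ import annotations) or once the subscripted class supports subscription, for example by subclassing typing.Generic, so pinning a cliff/stevedore combination that agrees on this should be enough to unbreak the import.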
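Once the deploy play has failed, the post and cleanup plays tear the testbed down in dependency order: servers and the keypair first, a wait until the servers are really gone, then leftover ports and volumes, detaching the router from its subnets, and finally subnets, networks, security groups, floating IPs and the router itself. A best-effort openstacksdk sketch of that sequence, assuming a clouds.yaml entry named testbed (the playbook derives the cloud name from a fact) and leaving out whatever per-resource filtering the real cleanup script applies:

import openstack
from openstack import exceptions

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

def quiet(call, *args, **kwargs):
    """Run one SDK call and ignore not-found/conflict errors: a best-effort sweep."""
    try:
        call(*args, **kwargs)
    except exceptions.SDKException as exc:
        print(f"skipped: {exc}")

# Servers and keypairs first, then wait for the servers to disappear.
servers = list(conn.compute.servers())
for server in servers:
    quiet(conn.compute.delete_server, server)
for keypair in conn.compute.keypairs():
    quiet(conn.compute.delete_keypair, keypair)
for server in servers:
    quiet(conn.compute.wait_for_delete, server)

# Ports and volumes the servers left behind.
for port in list(conn.network.ports()):
    quiet(conn.network.delete_port, port)
for volume in list(conn.block_storage.volumes()):
    quiet(conn.block_storage.delete_volume, volume)

# Routers must lose their subnet interfaces before subnets and networks can go.
routers = list(conn.network.routers())
for router in routers:
    for subnet in conn.network.subnets():
        quiet(conn.network.remove_interface_from_router, router, subnet_id=subnet.id)

for subnet in list(conn.network.subnets()):
    quiet(conn.network.delete_subnet, subnet)
for network in list(conn.network.networks()):
    quiet(conn.network.delete_network, network)
for group in list(conn.network.security_groups()):
    quiet(conn.network.delete_security_group, group)
for ip in list(conn.network.ips()):
    quiet(conn.network.delete_ip, ip)
for router in routers:
    quiet(conn.network.delete_router, router)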