2025-07-25 00:00:07.484909 | Job console starting
2025-07-25 00:00:07.497294 | Updating git repos
2025-07-25 00:00:07.567432 | Cloning repos into workspace
2025-07-25 00:00:07.846329 | Restoring repo states
2025-07-25 00:00:07.876772 | Merging changes
2025-07-25 00:00:07.876797 | Checking out repos
2025-07-25 00:00:08.270098 | Preparing playbooks
2025-07-25 00:00:09.082746 | Running Ansible setup
2025-07-25 00:00:15.019579 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-25 00:00:16.681783 |
2025-07-25 00:00:16.681893 | PLAY [Base pre]
2025-07-25 00:00:16.712403 |
2025-07-25 00:00:16.712513 | TASK [Setup log path fact]
2025-07-25 00:00:16.750390 | orchestrator | ok
2025-07-25 00:00:16.779225 |
2025-07-25 00:00:16.779381 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-25 00:00:16.827127 | orchestrator | ok
2025-07-25 00:00:16.849117 |
2025-07-25 00:00:16.849237 | TASK [emit-job-header : Print job information]
2025-07-25 00:00:16.918412 | # Job Information
2025-07-25 00:00:16.918545 | Ansible Version: 2.16.14
2025-07-25 00:00:16.918574 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-07-25 00:00:16.918602 | Pipeline: periodic-midnight
2025-07-25 00:00:16.918621 | Executor: 521e9411259a
2025-07-25 00:00:16.918638 | Triggered by: https://github.com/osism/testbed
2025-07-25 00:00:16.918656 | Event ID: d9a1f4aa7db84e45a10acc204424e732
2025-07-25 00:00:16.929070 |
2025-07-25 00:00:16.929183 | LOOP [emit-job-header : Print node information]
2025-07-25 00:00:17.102056 | orchestrator | ok:
2025-07-25 00:00:17.102328 | orchestrator | # Node Information
2025-07-25 00:00:17.102396 | orchestrator | Inventory Hostname: orchestrator
2025-07-25 00:00:17.102425 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-25 00:00:17.102447 | orchestrator | Username: zuul-testbed01
2025-07-25 00:00:17.102469 | orchestrator | Distro: Debian 12.11
2025-07-25 00:00:17.102493 | orchestrator | Provider: static-testbed
2025-07-25 00:00:17.102514 | orchestrator | Region:
2025-07-25 00:00:17.102535 | orchestrator | Label: testbed-orchestrator
2025-07-25 00:00:17.102554 | orchestrator | Product Name: OpenStack Nova
2025-07-25 00:00:17.102574 | orchestrator | Interface IP: 81.163.193.140
2025-07-25 00:00:17.118651 |
2025-07-25 00:00:17.118748 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-25 00:00:18.557551 | orchestrator -> localhost | changed
2025-07-25 00:00:18.563885 |
2025-07-25 00:00:18.563973 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-25 00:00:20.415846 | orchestrator -> localhost | changed
2025-07-25 00:00:20.434999 |
2025-07-25 00:00:20.435100 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-25 00:00:20.947794 | orchestrator -> localhost | ok
2025-07-25 00:00:20.953353 |
2025-07-25 00:00:20.953447 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-25 00:00:21.002957 | orchestrator | ok
2025-07-25 00:00:21.031530 | orchestrator | included: /var/lib/zuul/builds/d3b177b786864e9fa5b133c6bb8ca532/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-25 00:00:21.048274 |
2025-07-25 00:00:21.048374 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-25 00:00:23.297062 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-25 00:00:23.297250 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/d3b177b786864e9fa5b133c6bb8ca532/work/d3b177b786864e9fa5b133c6bb8ca532_id_rsa
2025-07-25 00:00:23.297282 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/d3b177b786864e9fa5b133c6bb8ca532/work/d3b177b786864e9fa5b133c6bb8ca532_id_rsa.pub
2025-07-25 00:00:23.297304 | orchestrator -> localhost | The key fingerprint is:
2025-07-25 00:00:23.297326 | orchestrator -> localhost | SHA256:Vfy5oHCLBOE2uBeHKmrylgU7apHs0QUU7zDG1Vd6Wtg zuul-build-sshkey
2025-07-25 00:00:23.297344 | orchestrator -> localhost | The key's randomart image is:
2025-07-25 00:00:23.297369 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-25 00:00:23.297387 | orchestrator -> localhost | | .o..o. .o. |
2025-07-25 00:00:23.297405 | orchestrator -> localhost | | ..oo.o .+.. |
2025-07-25 00:00:23.297421 | orchestrator -> localhost | | =o.*.oo.E . . |
2025-07-25 00:00:23.297438 | orchestrator -> localhost | | o += +o.= . o |
2025-07-25 00:00:23.297454 | orchestrator -> localhost | |. +o+...S= o . . |
2025-07-25 00:00:23.297474 | orchestrator -> localhost | | *oo.. . o . |
2025-07-25 00:00:23.297492 | orchestrator -> localhost | |+oo+ |
2025-07-25 00:00:23.297508 | orchestrator -> localhost | |++o |
2025-07-25 00:00:23.297525 | orchestrator -> localhost | |... |
2025-07-25 00:00:23.297541 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-25 00:00:23.297580 | orchestrator -> localhost | ok: Runtime: 0:00:01.179801
2025-07-25 00:00:23.303827 |
2025-07-25 00:00:23.303916 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-25 00:00:23.351898 | orchestrator | ok
2025-07-25 00:00:23.370350 | orchestrator | included: /var/lib/zuul/builds/d3b177b786864e9fa5b133c6bb8ca532/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-25 00:00:23.386054 |
2025-07-25 00:00:23.386166 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-25 00:00:23.441753 | orchestrator | skipping: Conditional result was False
2025-07-25 00:00:23.448159 |
2025-07-25 00:00:23.448247 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-25 00:00:24.338357 | orchestrator | changed
2025-07-25 00:00:24.343342 |
2025-07-25 00:00:24.343419 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-25 00:00:24.622634 | orchestrator | ok
2025-07-25 00:00:24.627694 |
2025-07-25 00:00:24.627773 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-25 00:00:25.408657 | orchestrator | ok
2025-07-25 00:00:25.413552 |
2025-07-25 00:00:25.413633 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-25 00:00:25.953205 | orchestrator | ok
2025-07-25 00:00:25.958058 |
2025-07-25 00:00:25.958161 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-25 00:00:26.032708 | orchestrator | skipping: Conditional result was False
2025-07-25 00:00:26.039214 |
2025-07-25 00:00:26.039327 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-25 00:00:26.987890 | orchestrator -> localhost | changed
2025-07-25 00:00:26.998692 |
2025-07-25 00:00:26.998784 | TASK [add-build-sshkey : Add back temp key]
2025-07-25 00:00:27.709562 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/d3b177b786864e9fa5b133c6bb8ca532/work/d3b177b786864e9fa5b133c6bb8ca532_id_rsa (zuul-build-sshkey)
2025-07-25 00:00:27.709744 | orchestrator -> localhost | ok: Runtime: 0:00:00.023584
2025-07-25 00:00:27.715429 |
2025-07-25 00:00:27.715520 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-25 00:00:28.209811 | orchestrator | ok
2025-07-25 00:00:28.222369 |
2025-07-25 00:00:28.222476 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-25 00:00:28.246114 | orchestrator | skipping: Conditional result was False
2025-07-25 00:00:28.396739 |
2025-07-25 00:00:28.396842 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-25 00:00:28.931629 | orchestrator | ok
2025-07-25 00:00:28.965306 |
2025-07-25 00:00:28.965418 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-25 00:00:29.013874 | orchestrator | ok
2025-07-25 00:00:29.023533 |
2025-07-25 00:00:29.023638 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-25 00:00:29.877403 | orchestrator -> localhost | ok
2025-07-25 00:00:29.884746 |
2025-07-25 00:00:29.884841 | TASK [validate-host : Collect information about the host]
2025-07-25 00:00:31.377215 | orchestrator | ok
2025-07-25 00:00:31.402704 |
2025-07-25 00:00:31.402814 | TASK [validate-host : Sanitize hostname]
2025-07-25 00:00:31.460322 | orchestrator | ok
2025-07-25 00:00:31.465450 |
2025-07-25 00:00:31.465546 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-25 00:00:32.480755 | orchestrator -> localhost | changed
2025-07-25 00:00:32.486935 |
2025-07-25 00:00:32.487031 | TASK [validate-host : Collect information about zuul worker]
2025-07-25 00:00:33.084767 | orchestrator | ok
2025-07-25 00:00:33.089802 |
2025-07-25 00:00:33.089890 | TASK [validate-host : Write out all zuul information for each host]
2025-07-25 00:00:34.193763 | orchestrator -> localhost | changed
2025-07-25 00:00:34.209730 |
2025-07-25 00:00:34.209834 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-25 00:00:34.530145 | orchestrator | ok
2025-07-25 00:00:34.537083 |
2025-07-25 00:00:34.537204 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-25 00:01:08.736202 | orchestrator | changed:
2025-07-25 00:01:08.736440 | orchestrator | .d..t...... src/
2025-07-25 00:01:08.736476 | orchestrator | .d..t...... src/github.com/
2025-07-25 00:01:08.736501 | orchestrator | .d..t...... src/github.com/osism/
2025-07-25 00:01:08.736523 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-25 00:01:08.736544 | orchestrator | RedHat.yml
2025-07-25 00:01:08.767325 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-25 00:01:08.767342 | orchestrator | RedHat.yml
2025-07-25 00:01:08.767395 | orchestrator | = 1.53.0"...
2025-07-25 00:01:24.054834 | orchestrator | 00:01:24.054 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-25 00:01:24.572731 | orchestrator | 00:01:24.572 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-25 00:01:25.425869 | orchestrator | 00:01:25.425 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-25 00:01:25.814291 | orchestrator | 00:01:25.813 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-07-25 00:01:26.709659 | orchestrator | 00:01:26.709 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-07-25 00:01:27.106092 | orchestrator | 00:01:27.105 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-25 00:01:27.708178 | orchestrator | 00:01:27.707 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-25 00:01:27.708295 | orchestrator | 00:01:27.708 STDOUT terraform: Providers are signed by their developers.
2025-07-25 00:01:27.708313 | orchestrator | 00:01:27.708 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-25 00:01:27.708326 | orchestrator | 00:01:27.708 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-25 00:01:27.708379 | orchestrator | 00:01:27.708 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-25 00:01:27.708459 | orchestrator | 00:01:27.708 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-25 00:01:27.708483 | orchestrator | 00:01:27.708 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-25 00:01:27.708499 | orchestrator | 00:01:27.708 STDOUT terraform: you run "tofu init" in the future.
2025-07-25 00:01:27.709045 | orchestrator | 00:01:27.708 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-25 00:01:27.709128 | orchestrator | 00:01:27.709 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-25 00:01:27.709195 | orchestrator | 00:01:27.709 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-25 00:01:27.709208 | orchestrator | 00:01:27.709 STDOUT terraform: should now work.
2025-07-25 00:01:27.709245 | orchestrator | 00:01:27.709 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-25 00:01:27.709306 | orchestrator | 00:01:27.709 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-25 00:01:27.709355 | orchestrator | 00:01:27.709 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-25 00:01:27.828434 | orchestrator | 00:01:27.828 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-07-25 00:01:27.828533 | orchestrator | 00:01:27.828 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-25 00:01:28.059729 | orchestrator | 00:01:28.059 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-25 00:01:28.059793 | orchestrator | 00:01:28.059 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-25 00:01:28.059802 | orchestrator | 00:01:28.059 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-25 00:01:28.059807 | orchestrator | 00:01:28.059 STDOUT terraform: for this configuration.
2025-07-25 00:01:28.203834 | orchestrator | 00:01:28.203 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-07-25 00:01:28.203972 | orchestrator | 00:01:28.203 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-25 00:01:28.321413 | orchestrator | 00:01:28.321 STDOUT terraform: ci.auto.tfvars
2025-07-25 00:01:28.327153 | orchestrator | 00:01:28.327 STDOUT terraform: default_custom.tf
2025-07-25 00:01:28.499401 | orchestrator | 00:01:28.499 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
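The Terragrunt warnings above spell out their own migration path. A minimal sketch of the updated invocation, assuming a Terragrunt release that already understands the `TG_`-prefixed variables and the `run --` passthrough (the binary path is the one reported in this log; the guard is only so the sketch is harmless on hosts without Terragrunt):

```shell
# Replace the deprecated TERRAGRUNT_TFPATH variable with its TG_-prefixed
# successor, pointing at the same tofu/terraform binary as before.
export TG_TF_PATH=/home/zuul-testbed01/terraform

# The deprecated top-level commands move behind `terragrunt run --`:
#   terragrunt workspace ...  ->  terragrunt run -- workspace ...
#   terragrunt fmt            ->  terragrunt run -- fmt
# Guarded so this is a no-op where terragrunt is not installed.
if command -v terragrunt >/dev/null 2>&1; then
    terragrunt run -- workspace new ci
    terragrunt run -- fmt
fi
```

Making this switch would silence the three repeated `TERRAGRUNT_TFPATH` warnings as well as the `workspace` and `fmt` deprecation notices seen in this run.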
2025-07-25 00:01:29.538104 | orchestrator | 00:01:29.534 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-25 00:01:30.073834 | orchestrator | 00:01:30.073 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-25 00:01:30.422104 | orchestrator | 00:01:30.420 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-25 00:01:30.422242 | orchestrator | 00:01:30.420 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-25 00:01:30.422251 | orchestrator | 00:01:30.421 STDOUT terraform:   + create
2025-07-25 00:01:30.422257 | orchestrator | 00:01:30.421 STDOUT terraform:  <= read (data resources)
2025-07-25 00:01:30.422262 | orchestrator | 00:01:30.421 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-25 00:01:30.422275 | orchestrator | 00:01:30.421 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-07-25 00:01:30.422279 | orchestrator | 00:01:30.422 STDOUT terraform:   # (config refers to values not yet known)
2025-07-25 00:01:30.422284 | orchestrator | 00:01:30.422 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-25 00:01:30.422288 | orchestrator | 00:01:30.422 STDOUT terraform:       + checksum = (known after apply)
2025-07-25 00:01:30.422292 | orchestrator | 00:01:30.422 STDOUT terraform:       + created_at = (known after apply)
2025-07-25 00:01:30.422297 | orchestrator | 00:01:30.422 STDOUT terraform:       + file = (known after apply)
2025-07-25 00:01:30.426197 | orchestrator | 00:01:30.422 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.426253 | orchestrator | 00:01:30.422 STDOUT terraform:       + metadata = (known after apply)
2025-07-25 00:01:30.426282 | orchestrator | 00:01:30.422 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-25 00:01:30.426290 | orchestrator | 00:01:30.422 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-07-25 00:01:30.426299 | orchestrator | 00:01:30.422 STDOUT terraform:       + most_recent = true
2025-07-25 00:01:30.426307 | orchestrator | 00:01:30.422 STDOUT terraform:       + name = (known after apply)
2025-07-25 00:01:30.426314 | orchestrator | 00:01:30.422 STDOUT terraform:       + protected = (known after apply)
2025-07-25 00:01:30.426320 | orchestrator | 00:01:30.422 STDOUT terraform:       + region = (known after apply)
2025-07-25 00:01:30.426327 | orchestrator | 00:01:30.422 STDOUT terraform:       + schema = (known after apply)
2025-07-25 00:01:30.426334 | orchestrator | 00:01:30.422 STDOUT terraform:       + size_bytes = (known after apply)
2025-07-25 00:01:30.426340 | orchestrator | 00:01:30.422 STDOUT terraform:       + tags = (known after apply)
2025-07-25 00:01:30.426347 | orchestrator | 00:01:30.422 STDOUT terraform:       + updated_at = (known after apply)
2025-07-25 00:01:30.426354 | orchestrator | 00:01:30.422 STDOUT terraform:     }
2025-07-25 00:01:30.426366 | orchestrator | 00:01:30.424 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-07-25 00:01:30.426375 | orchestrator | 00:01:30.424 STDOUT terraform:   # (config refers to values not yet known)
2025-07-25 00:01:30.426381 | orchestrator | 00:01:30.424 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-25 00:01:30.426388 | orchestrator | 00:01:30.424 STDOUT terraform:       + checksum = (known after apply)
2025-07-25 00:01:30.426395 | orchestrator | 00:01:30.424 STDOUT terraform:       + created_at = (known after apply)
2025-07-25 00:01:30.426402 | orchestrator | 00:01:30.424 STDOUT terraform:       + file = (known after apply)
2025-07-25 00:01:30.426408 | orchestrator | 00:01:30.424 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.426415 | orchestrator | 00:01:30.424 STDOUT terraform:       + metadata = (known after apply)
2025-07-25 00:01:30.426422 | orchestrator | 00:01:30.424 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-25 00:01:30.426429 | orchestrator | 00:01:30.424 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-07-25 00:01:30.426446 | orchestrator | 00:01:30.424 STDOUT terraform:       + most_recent = true
2025-07-25 00:01:30.426454 | orchestrator | 00:01:30.424 STDOUT terraform:       + name = (known after apply)
2025-07-25 00:01:30.426461 | orchestrator | 00:01:30.424 STDOUT terraform:       + protected = (known after apply)
2025-07-25 00:01:30.426468 | orchestrator | 00:01:30.424 STDOUT terraform:       + region = (known after apply)
2025-07-25 00:01:30.426474 | orchestrator | 00:01:30.424 STDOUT terraform:       + schema = (known after apply)
2025-07-25 00:01:30.426481 | orchestrator | 00:01:30.424 STDOUT terraform:       + size_bytes = (known after apply)
2025-07-25 00:01:30.426488 | orchestrator | 00:01:30.424 STDOUT terraform:       + tags = (known after apply)
2025-07-25 00:01:30.426495 | orchestrator | 00:01:30.424 STDOUT terraform:       + updated_at = (known after apply)
2025-07-25 00:01:30.426501 | orchestrator | 00:01:30.425 STDOUT terraform:     }
2025-07-25 00:01:30.428293 | orchestrator | 00:01:30.427 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-07-25 00:01:30.428478 | orchestrator | 00:01:30.427 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-07-25 00:01:30.428618 | orchestrator | 00:01:30.427 STDOUT terraform:       + content = (known after apply)
2025-07-25 00:01:30.428653 | orchestrator | 00:01:30.427 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-25 00:01:30.428720 | orchestrator | 00:01:30.427 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-25 00:01:30.428727 | orchestrator | 00:01:30.427 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-25 00:01:30.428734 | orchestrator | 00:01:30.427 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-25 00:01:30.428741 | orchestrator | 00:01:30.427 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-25 00:01:30.428748 | orchestrator | 00:01:30.427 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-25 00:01:30.428755 | orchestrator | 00:01:30.427 STDOUT terraform:       + directory_permission = "0777"
2025-07-25 00:01:30.428762 | orchestrator | 00:01:30.427 STDOUT terraform:       + file_permission = "0644"
2025-07-25 00:01:30.428768 | orchestrator | 00:01:30.427 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-07-25 00:01:30.428775 | orchestrator | 00:01:30.427 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.428782 | orchestrator | 00:01:30.427 STDOUT terraform:     }
2025-07-25 00:01:30.430115 | orchestrator | 00:01:30.429 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-07-25 00:01:30.430168 | orchestrator | 00:01:30.429 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-07-25 00:01:30.430174 | orchestrator | 00:01:30.429 STDOUT terraform:       + content = (known after apply)
2025-07-25 00:01:30.430178 | orchestrator | 00:01:30.429 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-25 00:01:30.430182 | orchestrator | 00:01:30.429 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-25 00:01:30.430186 | orchestrator | 00:01:30.429 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-25 00:01:30.430190 | orchestrator | 00:01:30.429 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-25 00:01:30.430193 | orchestrator | 00:01:30.429 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-25 00:01:30.430197 | orchestrator | 00:01:30.429 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-25 00:01:30.430201 | orchestrator | 00:01:30.429 STDOUT terraform:       + directory_permission = "0777"
2025-07-25 00:01:30.430205 | orchestrator | 00:01:30.429 STDOUT terraform:       + file_permission = "0644"
2025-07-25 00:01:30.430209 | orchestrator | 00:01:30.429 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-07-25 00:01:30.430219 | orchestrator | 00:01:30.430 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.430223 | orchestrator | 00:01:30.430 STDOUT terraform:     }
2025-07-25 00:01:30.431292 | orchestrator | 00:01:30.431 STDOUT terraform:   # local_file.inventory will be created
2025-07-25 00:01:30.431334 | orchestrator | 00:01:30.431 STDOUT terraform:   + resource "local_file" "inventory" {
2025-07-25 00:01:30.431393 | orchestrator | 00:01:30.431 STDOUT terraform:       + content = (known after apply)
2025-07-25 00:01:30.431448 | orchestrator | 00:01:30.431 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-25 00:01:30.431511 | orchestrator | 00:01:30.431 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-25 00:01:30.431607 | orchestrator | 00:01:30.431 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-25 00:01:30.431649 | orchestrator | 00:01:30.431 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-25 00:01:30.431714 | orchestrator | 00:01:30.431 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-25 00:01:30.431776 | orchestrator | 00:01:30.431 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-25 00:01:30.431817 | orchestrator | 00:01:30.431 STDOUT terraform:       + directory_permission = "0777"
2025-07-25 00:01:30.431861 | orchestrator | 00:01:30.431 STDOUT terraform:       + file_permission = "0644"
2025-07-25 00:01:30.431947 | orchestrator | 00:01:30.431 STDOUT terraform:       + filename = "inventory.ci"
2025-07-25 00:01:30.432013 | orchestrator | 00:01:30.431 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.432031 | orchestrator | 00:01:30.432 STDOUT terraform:     }
2025-07-25 00:01:30.433318 | orchestrator | 00:01:30.433 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-07-25 00:01:30.433368 | orchestrator | 00:01:30.433 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-07-25 00:01:30.433415 | orchestrator | 00:01:30.433 STDOUT terraform:       + content = (sensitive value)
2025-07-25 00:01:30.433488 | orchestrator | 00:01:30.433 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-25 00:01:30.433536 | orchestrator | 00:01:30.433 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-25 00:01:30.433602 | orchestrator | 00:01:30.433 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-25 00:01:30.433666 | orchestrator | 00:01:30.433 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-25 00:01:30.433731 | orchestrator | 00:01:30.433 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-25 00:01:30.433827 | orchestrator | 00:01:30.433 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-25 00:01:30.433851 | orchestrator | 00:01:30.433 STDOUT terraform:       + directory_permission = "0700"
2025-07-25 00:01:30.433909 | orchestrator | 00:01:30.433 STDOUT terraform:       + file_permission = "0600"
2025-07-25 00:01:30.433961 | orchestrator | 00:01:30.433 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-07-25 00:01:30.434046 | orchestrator | 00:01:30.433 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.434062 | orchestrator | 00:01:30.434 STDOUT terraform:     }
2025-07-25 00:01:30.435242 | orchestrator | 00:01:30.434 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-07-25 00:01:30.435298 | orchestrator | 00:01:30.435 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-07-25 00:01:30.435347 | orchestrator | 00:01:30.435 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.435354 | orchestrator | 00:01:30.435 STDOUT terraform:     }
2025-07-25 00:01:30.436049 | orchestrator | 00:01:30.435 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-25 00:01:30.436141 | orchestrator | 00:01:30.436 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-25 00:01:30.436206 | orchestrator | 00:01:30.436 STDOUT terraform:       + attachment = (known after apply)
2025-07-25 00:01:30.436250 | orchestrator | 00:01:30.436 STDOUT terraform:       + availability_zone = "nova"
2025-07-25 00:01:30.436321 | orchestrator | 00:01:30.436 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.436390 | orchestrator | 00:01:30.436 STDOUT terraform:       + image_id = (known after apply)
2025-07-25 00:01:30.436457 | orchestrator | 00:01:30.436 STDOUT terraform:       + metadata = (known after apply)
2025-07-25 00:01:30.436545 | orchestrator | 00:01:30.436 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-07-25 00:01:30.436613 | orchestrator | 00:01:30.436 STDOUT terraform:       + region = (known after apply)
2025-07-25 00:01:30.436652 | orchestrator | 00:01:30.436 STDOUT terraform:       + size = 80
2025-07-25 00:01:30.436698 | orchestrator | 00:01:30.436 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-25 00:01:30.436767 | orchestrator | 00:01:30.436 STDOUT terraform:       + volume_type = "ssd"
2025-07-25 00:01:30.436774 | orchestrator | 00:01:30.436 STDOUT terraform:     }
2025-07-25 00:01:30.438104 | orchestrator | 00:01:30.437 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-25 00:01:30.438195 | orchestrator | 00:01:30.438 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-25 00:01:30.438243 | orchestrator | 00:01:30.438 STDOUT terraform:       + attachment = (known after apply)
2025-07-25 00:01:30.438288 | orchestrator | 00:01:30.438 STDOUT terraform:       + availability_zone = "nova"
2025-07-25 00:01:30.438359 | orchestrator | 00:01:30.438 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.438441 | orchestrator | 00:01:30.438 STDOUT terraform:       + image_id = (known after apply)
2025-07-25 00:01:30.438502 | orchestrator | 00:01:30.438 STDOUT terraform:       + metadata = (known after apply)
2025-07-25 00:01:30.438587 | orchestrator | 00:01:30.438 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-07-25 00:01:30.438657 | orchestrator | 00:01:30.438 STDOUT terraform:       + region = (known after apply)
2025-07-25 00:01:30.438698 | orchestrator | 00:01:30.438 STDOUT terraform:       + size = 80
2025-07-25 00:01:30.438739 | orchestrator | 00:01:30.438 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-25 00:01:30.438785 | orchestrator | 00:01:30.438 STDOUT terraform:       + volume_type = "ssd"
2025-07-25 00:01:30.438810 | orchestrator | 00:01:30.438 STDOUT terraform:     }
2025-07-25 00:01:30.444745 | orchestrator | 00:01:30.439 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-25 00:01:30.444801 | orchestrator | 00:01:30.439 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-25 00:01:30.444807 | orchestrator | 00:01:30.440 STDOUT terraform:       + attachment = (known after apply)
2025-07-25 00:01:30.444825 | orchestrator | 00:01:30.440 STDOUT terraform:       + availability_zone = "nova"
2025-07-25 00:01:30.444829 | orchestrator | 00:01:30.440 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.444834 | orchestrator | 00:01:30.440 STDOUT terraform:       + image_id = (known after apply)
2025-07-25 00:01:30.444838 | orchestrator | 00:01:30.440 STDOUT terraform:       + metadata = (known after apply)
2025-07-25 00:01:30.444842 | orchestrator | 00:01:30.440 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-07-25 00:01:30.444846 | orchestrator | 00:01:30.440 STDOUT terraform:       + region = (known after apply)
2025-07-25 00:01:30.444850 | orchestrator | 00:01:30.440 STDOUT terraform:       + size = 80
2025-07-25 00:01:30.444854 | orchestrator | 00:01:30.440 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-25 00:01:30.444858 | orchestrator | 00:01:30.440 STDOUT terraform:       + volume_type = "ssd"
2025-07-25 00:01:30.444862 | orchestrator | 00:01:30.440 STDOUT terraform:     }
2025-07-25 00:01:30.444866 | orchestrator | 00:01:30.441 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-25 00:01:30.444870 | orchestrator | 00:01:30.441 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-25 00:01:30.444874 | orchestrator | 00:01:30.441 STDOUT terraform:       + attachment = (known after apply)
2025-07-25 00:01:30.444883 | orchestrator | 00:01:30.441 STDOUT terraform:       + availability_zone = "nova"
2025-07-25 00:01:30.444888 | orchestrator | 00:01:30.441 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.444891 | orchestrator | 00:01:30.441 STDOUT terraform:       + image_id = (known after apply)
2025-07-25 00:01:30.444895 | orchestrator | 00:01:30.441 STDOUT terraform:       + metadata = (known after apply)
2025-07-25 00:01:30.444899 | orchestrator | 00:01:30.441 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-07-25 00:01:30.444903 | orchestrator | 00:01:30.442 STDOUT terraform:       + region = (known after apply)
2025-07-25 00:01:30.444907 | orchestrator | 00:01:30.442 STDOUT terraform:       + size = 80
2025-07-25 00:01:30.444911 | orchestrator | 00:01:30.442 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-25 00:01:30.444914 | orchestrator | 00:01:30.442 STDOUT terraform:       + volume_type = "ssd"
2025-07-25 00:01:30.444918 | orchestrator | 00:01:30.442 STDOUT terraform:     }
2025-07-25 00:01:30.447117 | orchestrator | 00:01:30.446 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-25 00:01:30.447203 | orchestrator | 00:01:30.447 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-25 00:01:30.447270 | orchestrator | 00:01:30.447 STDOUT terraform:       + attachment = (known after apply)
2025-07-25 00:01:30.447316 | orchestrator | 00:01:30.447 STDOUT terraform:       + availability_zone = "nova"
2025-07-25 00:01:30.447375 | orchestrator | 00:01:30.447 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.447438 | orchestrator | 00:01:30.447 STDOUT terraform:       + image_id = (known after apply)
2025-07-25 00:01:30.447506 | orchestrator | 00:01:30.447 STDOUT terraform:       + metadata = (known after apply)
2025-07-25 00:01:30.447588 | orchestrator | 00:01:30.447 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-07-25 00:01:30.447652 | orchestrator | 00:01:30.447 STDOUT terraform:       + region = (known after apply)
2025-07-25 00:01:30.447688 | orchestrator | 00:01:30.447 STDOUT terraform:       + size = 80
2025-07-25 00:01:30.447740 | orchestrator | 00:01:30.447 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-25 00:01:30.447791 | orchestrator | 00:01:30.447 STDOUT terraform:       + volume_type = "ssd"
2025-07-25 00:01:30.447798 | orchestrator | 00:01:30.447 STDOUT terraform:     }
2025-07-25 00:01:30.447896 | orchestrator | 00:01:30.447 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-25 00:01:30.448006 | orchestrator | 00:01:30.447 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-25 00:01:30.448071 | orchestrator | 00:01:30.447 STDOUT terraform:       + attachment = (known after apply)
2025-07-25 00:01:30.448109 | orchestrator | 00:01:30.448 STDOUT terraform:       + availability_zone = "nova"
2025-07-25 00:01:30.448183 | orchestrator | 00:01:30.448 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.448252 | orchestrator | 00:01:30.448 STDOUT terraform:       + image_id = (known after apply)
2025-07-25 00:01:30.448321 | orchestrator | 00:01:30.448 STDOUT terraform:       + metadata = (known after apply)
2025-07-25 00:01:30.448405 | orchestrator | 00:01:30.448 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-07-25 00:01:30.448483 | orchestrator | 00:01:30.448 STDOUT terraform:       + region = (known after apply)
2025-07-25 00:01:30.448555 | orchestrator | 00:01:30.448 STDOUT terraform:       + size = 80
2025-07-25 00:01:30.448597 | orchestrator | 00:01:30.448 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-25 00:01:30.448644 | orchestrator | 00:01:30.448 STDOUT terraform:       + volume_type = "ssd"
2025-07-25 00:01:30.448667 | orchestrator | 00:01:30.448 STDOUT terraform:     }
2025-07-25 00:01:30.448758 | orchestrator | 00:01:30.448 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-25 00:01:30.448844 | orchestrator | 00:01:30.448 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-25 00:01:30.448910 | orchestrator | 00:01:30.448 STDOUT terraform:       + attachment = (known after apply)
2025-07-25 00:01:30.448964 | orchestrator | 00:01:30.448 STDOUT terraform:       + availability_zone = "nova"
2025-07-25 00:01:30.449037 | orchestrator | 00:01:30.448 STDOUT terraform:       + id = (known after apply)
2025-07-25 00:01:30.449095 | orchestrator | 00:01:30.449 STDOUT terraform:       + image_id = (known after apply)
2025-07-25 00:01:30.449160 | orchestrator | 00:01:30.449 STDOUT terraform:       + metadata = (known after apply)
2025-07-25 00:01:30.449241 | orchestrator | 00:01:30.449 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-07-25 00:01:30.449308 | orchestrator | 00:01:30.449 STDOUT terraform:       + region = (known after apply)
2025-07-25 00:01:30.449360 | orchestrator | 00:01:30.449 STDOUT terraform:       + size = 80
2025-07-25 00:01:30.449394 | orchestrator | 00:01:30.449 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-25 00:01:30.449448 | orchestrator | 00:01:30.449 STDOUT terraform:       + volume_type = "ssd"
2025-07-25 00:01:30.449455 | orchestrator | 00:01:30.449 STDOUT terraform:     }
2025-07-25 00:01:30.449534 | orchestrator | 00:01:30.449 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-25 00:01:30.449613 | orchestrator | 00:01:30.449 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-25 00:01:30.449692 | orchestrator | 00:01:30.449 STDOUT
terraform:  + attachment = (known after apply) 2025-07-25 00:01:30.449716 | orchestrator | 00:01:30.449 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.449782 | orchestrator | 00:01:30.449 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.449854 | orchestrator | 00:01:30.449 STDOUT terraform:  + metadata = (known after apply) 2025-07-25 00:01:30.451907 | orchestrator | 00:01:30.449 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-25 00:01:30.451923 | orchestrator | 00:01:30.449 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.451964 | orchestrator | 00:01:30.449 STDOUT terraform:  + size = 20 2025-07-25 00:01:30.451969 | orchestrator | 00:01:30.450 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-25 00:01:30.451973 | orchestrator | 00:01:30.450 STDOUT terraform:  + volume_type = "ssd" 2025-07-25 00:01:30.451977 | orchestrator | 00:01:30.450 STDOUT terraform:  } 2025-07-25 00:01:30.451981 | orchestrator | 00:01:30.450 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-25 00:01:30.451985 | orchestrator | 00:01:30.451 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-25 00:01:30.451988 | orchestrator | 00:01:30.451 STDOUT terraform:  + attachment = (known after apply) 2025-07-25 00:01:30.451992 | orchestrator | 00:01:30.451 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.451996 | orchestrator | 00:01:30.451 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.452000 | orchestrator | 00:01:30.451 STDOUT terraform:  + metadata = (known after apply) 2025-07-25 00:01:30.452004 | orchestrator | 00:01:30.451 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-25 00:01:30.452007 | orchestrator | 00:01:30.451 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.452014 | orchestrator | 00:01:30.451 STDOUT terraform:  + size = 20 2025-07-25 00:01:30.452018 | 
orchestrator | 00:01:30.451 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-25 00:01:30.452022 | orchestrator | 00:01:30.451 STDOUT terraform:  + volume_type = "ssd" 2025-07-25 00:01:30.452025 | orchestrator | 00:01:30.451 STDOUT terraform:  } 2025-07-25 00:01:30.452029 | orchestrator | 00:01:30.451 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-25 00:01:30.452033 | orchestrator | 00:01:30.451 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-25 00:01:30.452037 | orchestrator | 00:01:30.451 STDOUT terraform:  + attachment = (known after apply) 2025-07-25 00:01:30.452052 | orchestrator | 00:01:30.451 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.452058 | orchestrator | 00:01:30.451 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.452062 | orchestrator | 00:01:30.451 STDOUT terraform:  + metadata = (known after apply) 2025-07-25 00:01:30.452090 | orchestrator | 00:01:30.452 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-25 00:01:30.452166 | orchestrator | 00:01:30.452 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.452198 | orchestrator | 00:01:30.452 STDOUT terraform:  + size = 20 2025-07-25 00:01:30.452247 | orchestrator | 00:01:30.452 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-25 00:01:30.452287 | orchestrator | 00:01:30.452 STDOUT terraform:  + volume_type = "ssd" 2025-07-25 00:01:30.452312 | orchestrator | 00:01:30.452 STDOUT terraform:  } 2025-07-25 00:01:30.452398 | orchestrator | 00:01:30.452 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-25 00:01:30.452492 | orchestrator | 00:01:30.452 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-25 00:01:30.452549 | orchestrator | 00:01:30.452 STDOUT terraform:  + attachment = (known after apply) 2025-07-25 00:01:30.452595 | orchestrator | 
00:01:30.452 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.452665 | orchestrator | 00:01:30.452 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.452731 | orchestrator | 00:01:30.452 STDOUT terraform:  + metadata = (known after apply) 2025-07-25 00:01:30.452804 | orchestrator | 00:01:30.452 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-25 00:01:30.452873 | orchestrator | 00:01:30.452 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.452911 | orchestrator | 00:01:30.452 STDOUT terraform:  + size = 20 2025-07-25 00:01:30.452982 | orchestrator | 00:01:30.452 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-25 00:01:30.453026 | orchestrator | 00:01:30.452 STDOUT terraform:  + volume_type = "ssd" 2025-07-25 00:01:30.453049 | orchestrator | 00:01:30.453 STDOUT terraform:  } 2025-07-25 00:01:30.453129 | orchestrator | 00:01:30.453 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-25 00:01:30.453208 | orchestrator | 00:01:30.453 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-25 00:01:30.453272 | orchestrator | 00:01:30.453 STDOUT terraform:  + attachment = (known after apply) 2025-07-25 00:01:30.453313 | orchestrator | 00:01:30.453 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.453380 | orchestrator | 00:01:30.453 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.453443 | orchestrator | 00:01:30.453 STDOUT terraform:  + metadata = (known after apply) 2025-07-25 00:01:30.453511 | orchestrator | 00:01:30.453 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-25 00:01:30.453578 | orchestrator | 00:01:30.453 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.453610 | orchestrator | 00:01:30.453 STDOUT terraform:  + size = 20 2025-07-25 00:01:30.453652 | orchestrator | 00:01:30.453 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-25 
00:01:30.453696 | orchestrator | 00:01:30.453 STDOUT terraform:  + volume_type = "ssd" 2025-07-25 00:01:30.453717 | orchestrator | 00:01:30.453 STDOUT terraform:  } 2025-07-25 00:01:30.453796 | orchestrator | 00:01:30.453 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-25 00:01:30.453872 | orchestrator | 00:01:30.453 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-25 00:01:30.453957 | orchestrator | 00:01:30.453 STDOUT terraform:  + attachment = (known after apply) 2025-07-25 00:01:30.453989 | orchestrator | 00:01:30.453 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.462259 | orchestrator | 00:01:30.453 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.462336 | orchestrator | 00:01:30.458 STDOUT terraform:  + metadata = (known after apply) 2025-07-25 00:01:30.462345 | orchestrator | 00:01:30.458 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-25 00:01:30.462353 | orchestrator | 00:01:30.458 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.462371 | orchestrator | 00:01:30.458 STDOUT terraform:  + size = 20 2025-07-25 00:01:30.462380 | orchestrator | 00:01:30.458 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-25 00:01:30.462387 | orchestrator | 00:01:30.458 STDOUT terraform:  + volume_type = "ssd" 2025-07-25 00:01:30.462395 | orchestrator | 00:01:30.458 STDOUT terraform:  } 2025-07-25 00:01:30.462402 | orchestrator | 00:01:30.458 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-25 00:01:30.462410 | orchestrator | 00:01:30.458 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-25 00:01:30.462417 | orchestrator | 00:01:30.458 STDOUT terraform:  + attachment = (known after apply) 2025-07-25 00:01:30.462424 | orchestrator | 00:01:30.459 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.462431 | 
orchestrator | 00:01:30.459 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.462438 | orchestrator | 00:01:30.459 STDOUT terraform:  + metadata = (known after apply) 2025-07-25 00:01:30.462445 | orchestrator | 00:01:30.459 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-25 00:01:30.462452 | orchestrator | 00:01:30.459 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.462459 | orchestrator | 00:01:30.459 STDOUT terraform:  + size = 20 2025-07-25 00:01:30.462466 | orchestrator | 00:01:30.459 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-25 00:01:30.462473 | orchestrator | 00:01:30.459 STDOUT terraform:  + volume_type = "ssd" 2025-07-25 00:01:30.462479 | orchestrator | 00:01:30.459 STDOUT terraform:  } 2025-07-25 00:01:30.462486 | orchestrator | 00:01:30.459 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-25 00:01:30.462492 | orchestrator | 00:01:30.459 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-25 00:01:30.462523 | orchestrator | 00:01:30.459 STDOUT terraform:  + attachment = (known after apply) 2025-07-25 00:01:30.462530 | orchestrator | 00:01:30.459 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.462537 | orchestrator | 00:01:30.459 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.462543 | orchestrator | 00:01:30.459 STDOUT terraform:  + metadata = (known after apply) 2025-07-25 00:01:30.462549 | orchestrator | 00:01:30.459 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-25 00:01:30.462555 | orchestrator | 00:01:30.459 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.462561 | orchestrator | 00:01:30.459 STDOUT terraform:  + size = 20 2025-07-25 00:01:30.462567 | orchestrator | 00:01:30.459 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-25 00:01:30.462574 | orchestrator | 00:01:30.459 STDOUT terraform:  + volume_type = "ssd" 
2025-07-25 00:01:30.462580 | orchestrator | 00:01:30.459 STDOUT terraform:  } 2025-07-25 00:01:30.462587 | orchestrator | 00:01:30.460 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-25 00:01:30.462593 | orchestrator | 00:01:30.460 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-25 00:01:30.462600 | orchestrator | 00:01:30.460 STDOUT terraform:  + attachment = (known after apply) 2025-07-25 00:01:30.462606 | orchestrator | 00:01:30.460 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.462613 | orchestrator | 00:01:30.460 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.462634 | orchestrator | 00:01:30.460 STDOUT terraform:  + metadata = (known after apply) 2025-07-25 00:01:30.462642 | orchestrator | 00:01:30.460 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-25 00:01:30.462648 | orchestrator | 00:01:30.460 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.462655 | orchestrator | 00:01:30.460 STDOUT terraform:  + size = 20 2025-07-25 00:01:30.462661 | orchestrator | 00:01:30.460 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-25 00:01:30.462667 | orchestrator | 00:01:30.460 STDOUT terraform:  + volume_type = "ssd" 2025-07-25 00:01:30.462674 | orchestrator | 00:01:30.460 STDOUT terraform:  } 2025-07-25 00:01:30.462688 | orchestrator | 00:01:30.460 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-25 00:01:30.462696 | orchestrator | 00:01:30.460 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-25 00:01:30.462703 | orchestrator | 00:01:30.460 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-25 00:01:30.462710 | orchestrator | 00:01:30.460 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-25 00:01:30.462717 | orchestrator | 00:01:30.460 STDOUT terraform:  + all_metadata = (known after apply) 
2025-07-25 00:01:30.462724 | orchestrator | 00:01:30.460 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.462730 | orchestrator | 00:01:30.460 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.462745 | orchestrator | 00:01:30.460 STDOUT terraform:  + config_drive = true 2025-07-25 00:01:30.462752 | orchestrator | 00:01:30.460 STDOUT terraform:  + created = (known after apply) 2025-07-25 00:01:30.462758 | orchestrator | 00:01:30.461 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-25 00:01:30.462764 | orchestrator | 00:01:30.461 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-25 00:01:30.462771 | orchestrator | 00:01:30.461 STDOUT terraform:  + force_delete = false 2025-07-25 00:01:30.462777 | orchestrator | 00:01:30.461 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-25 00:01:30.462784 | orchestrator | 00:01:30.461 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.462790 | orchestrator | 00:01:30.461 STDOUT terraform:  + image_id = (known after apply) 2025-07-25 00:01:30.462797 | orchestrator | 00:01:30.461 STDOUT terraform:  + image_name = (known after apply) 2025-07-25 00:01:30.462803 | orchestrator | 00:01:30.461 STDOUT terraform:  + key_pair = "testbed" 2025-07-25 00:01:30.462809 | orchestrator | 00:01:30.461 STDOUT terraform:  + name = "testbed-manager" 2025-07-25 00:01:30.462816 | orchestrator | 00:01:30.461 STDOUT terraform:  + power_state = "active" 2025-07-25 00:01:30.462823 | orchestrator | 00:01:30.461 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.462829 | orchestrator | 00:01:30.461 STDOUT terraform:  + security_groups = (known after apply) 2025-07-25 00:01:30.462836 | orchestrator | 00:01:30.461 STDOUT terraform:  + stop_before_destroy = false 2025-07-25 00:01:30.462843 | orchestrator | 00:01:30.461 STDOUT terraform:  + updated = (known after apply) 2025-07-25 00:01:30.462850 | orchestrator | 00:01:30.461 STDOUT terraform:  + 
user_data = (sensitive value) 2025-07-25 00:01:30.462857 | orchestrator | 00:01:30.461 STDOUT terraform:  + block_device { 2025-07-25 00:01:30.462864 | orchestrator | 00:01:30.461 STDOUT terraform:  + boot_index = 0 2025-07-25 00:01:30.462870 | orchestrator | 00:01:30.461 STDOUT terraform:  + delete_on_termination = false 2025-07-25 00:01:30.462877 | orchestrator | 00:01:30.461 STDOUT terraform:  + destination_type = "volume" 2025-07-25 00:01:30.462883 | orchestrator | 00:01:30.461 STDOUT terraform:  + multiattach = false 2025-07-25 00:01:30.463787 | orchestrator | 00:01:30.462 STDOUT terraform:  + source_type = "volume" 2025-07-25 00:01:30.463830 | orchestrator | 00:01:30.463 STDOUT terraform:  + uuid = (known after apply) 2025-07-25 00:01:30.463839 | orchestrator | 00:01:30.463 STDOUT terraform:  } 2025-07-25 00:01:30.463846 | orchestrator | 00:01:30.463 STDOUT terraform:  + network { 2025-07-25 00:01:30.463853 | orchestrator | 00:01:30.463 STDOUT terraform:  + access_network = false 2025-07-25 00:01:30.463859 | orchestrator | 00:01:30.463 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-25 00:01:30.463865 | orchestrator | 00:01:30.463 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-25 00:01:30.463871 | orchestrator | 00:01:30.463 STDOUT terraform:  + mac = (known after apply) 2025-07-25 00:01:30.463890 | orchestrator | 00:01:30.463 STDOUT terraform:  + name = (known after apply) 2025-07-25 00:01:30.463896 | orchestrator | 00:01:30.463 STDOUT terraform:  + port = (known after apply) 2025-07-25 00:01:30.463903 | orchestrator | 00:01:30.463 STDOUT terraform:  + uuid = (known after apply) 2025-07-25 00:01:30.463909 | orchestrator | 00:01:30.463 STDOUT terraform:  } 2025-07-25 00:01:30.463915 | orchestrator | 00:01:30.463 STDOUT terraform:  } 2025-07-25 00:01:30.463921 | orchestrator | 00:01:30.463 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-25 00:01:30.463952 | orchestrator | 00:01:30.463 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-25 00:01:30.463959 | orchestrator | 00:01:30.463 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-25 00:01:30.466082 | orchestrator | 00:01:30.463 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-25 00:01:30.466110 | orchestrator | 00:01:30.464 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-25 00:01:30.466130 | orchestrator | 00:01:30.464 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.466137 | orchestrator | 00:01:30.464 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.466144 | orchestrator | 00:01:30.464 STDOUT terraform:  + config_drive = true 2025-07-25 00:01:30.466150 | orchestrator | 00:01:30.464 STDOUT terraform:  + created = (known after apply) 2025-07-25 00:01:30.466156 | orchestrator | 00:01:30.464 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-25 00:01:30.466162 | orchestrator | 00:01:30.464 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-25 00:01:30.466169 | orchestrator | 00:01:30.464 STDOUT terraform:  + force_delete = false 2025-07-25 00:01:30.466175 | orchestrator | 00:01:30.464 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-25 00:01:30.466182 | orchestrator | 00:01:30.464 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.466188 | orchestrator | 00:01:30.464 STDOUT terraform:  + image_id = (known after apply) 2025-07-25 00:01:30.466193 | orchestrator | 00:01:30.464 STDOUT terraform:  + image_name = (known after apply) 2025-07-25 00:01:30.466200 | orchestrator | 00:01:30.464 STDOUT terraform:  + key_pair = "testbed" 2025-07-25 00:01:30.466206 | orchestrator | 00:01:30.464 STDOUT terraform:  + name = "testbed-node-0" 2025-07-25 00:01:30.466212 | orchestrator | 00:01:30.464 STDOUT terraform:  + power_state = "active" 2025-07-25 00:01:30.466218 | orchestrator | 00:01:30.464 STDOUT terraform:  + region = (known after 
apply) 2025-07-25 00:01:30.466224 | orchestrator | 00:01:30.464 STDOUT terraform:  + security_groups = (known after apply) 2025-07-25 00:01:30.466230 | orchestrator | 00:01:30.464 STDOUT terraform:  + stop_before_destroy = false 2025-07-25 00:01:30.466237 | orchestrator | 00:01:30.464 STDOUT terraform:  + updated = (known after apply) 2025-07-25 00:01:30.466243 | orchestrator | 00:01:30.464 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-25 00:01:30.466250 | orchestrator | 00:01:30.465 STDOUT terraform:  + block_device { 2025-07-25 00:01:30.466266 | orchestrator | 00:01:30.465 STDOUT terraform:  + boot_index = 0 2025-07-25 00:01:30.466272 | orchestrator | 00:01:30.465 STDOUT terraform:  + delete_on_termination = false 2025-07-25 00:01:30.466279 | orchestrator | 00:01:30.465 STDOUT terraform:  + destination_type = "volume" 2025-07-25 00:01:30.466285 | orchestrator | 00:01:30.465 STDOUT terraform:  + multiattach = false 2025-07-25 00:01:30.466316 | orchestrator | 00:01:30.465 STDOUT terraform:  + source_type = "volume" 2025-07-25 00:01:30.466323 | orchestrator | 00:01:30.465 STDOUT terraform:  + uuid = (known after apply) 2025-07-25 00:01:30.466330 | orchestrator | 00:01:30.465 STDOUT terraform:  } 2025-07-25 00:01:30.466337 | orchestrator | 00:01:30.465 STDOUT terraform:  + network { 2025-07-25 00:01:30.466343 | orchestrator | 00:01:30.465 STDOUT terraform:  + access_network = false 2025-07-25 00:01:30.466349 | orchestrator | 00:01:30.465 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-25 00:01:30.466356 | orchestrator | 00:01:30.465 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-25 00:01:30.466362 | orchestrator | 00:01:30.465 STDOUT terraform:  + mac = (known after apply) 2025-07-25 00:01:30.466369 | orchestrator | 00:01:30.465 STDOUT terraform:  + name = (known after apply) 2025-07-25 00:01:30.466375 | orchestrator | 00:01:30.465 STDOUT terraform:  + port = (known after apply) 2025-07-25 
00:01:30.466381 | orchestrator | 00:01:30.465 STDOUT terraform:  + uuid = (known after apply) 2025-07-25 00:01:30.466388 | orchestrator | 00:01:30.465 STDOUT terraform:  } 2025-07-25 00:01:30.466408 | orchestrator | 00:01:30.465 STDOUT terraform:  } 2025-07-25 00:01:30.466414 | orchestrator | 00:01:30.465 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-25 00:01:30.466421 | orchestrator | 00:01:30.465 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-25 00:01:30.467981 | orchestrator | 00:01:30.465 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-25 00:01:30.468016 | orchestrator | 00:01:30.466 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-25 00:01:30.468024 | orchestrator | 00:01:30.466 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-25 00:01:30.468030 | orchestrator | 00:01:30.466 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.468038 | orchestrator | 00:01:30.466 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.468045 | orchestrator | 00:01:30.466 STDOUT terraform:  + config_drive = true 2025-07-25 00:01:30.468052 | orchestrator | 00:01:30.466 STDOUT terraform:  + created = (known after apply) 2025-07-25 00:01:30.468059 | orchestrator | 00:01:30.466 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-25 00:01:30.468066 | orchestrator | 00:01:30.466 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-25 00:01:30.468073 | orchestrator | 00:01:30.466 STDOUT terraform:  + force_delete = false 2025-07-25 00:01:30.468089 | orchestrator | 00:01:30.466 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-25 00:01:30.468105 | orchestrator | 00:01:30.467 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.468112 | orchestrator | 00:01:30.467 STDOUT terraform:  + image_id = (known after apply) 2025-07-25 00:01:30.468120 | orchestrator | 00:01:30.467 STDOUT 
terraform:  + image_name = (known after apply) 2025-07-25 00:01:30.468127 | orchestrator | 00:01:30.467 STDOUT terraform:  + key_pair = "testbed" 2025-07-25 00:01:30.468134 | orchestrator | 00:01:30.467 STDOUT terraform:  + name = "testbed-node-1" 2025-07-25 00:01:30.468141 | orchestrator | 00:01:30.467 STDOUT terraform:  + power_state = "active" 2025-07-25 00:01:30.468148 | orchestrator | 00:01:30.467 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.468155 | orchestrator | 00:01:30.467 STDOUT terraform:  + security_groups = (known after apply) 2025-07-25 00:01:30.468162 | orchestrator | 00:01:30.467 STDOUT terraform:  + stop_before_destroy = false 2025-07-25 00:01:30.468169 | orchestrator | 00:01:30.467 STDOUT terraform:  + updated = (known after apply) 2025-07-25 00:01:30.468176 | orchestrator | 00:01:30.467 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-25 00:01:30.468184 | orchestrator | 00:01:30.467 STDOUT terraform:  + block_device { 2025-07-25 00:01:30.468190 | orchestrator | 00:01:30.467 STDOUT terraform:  + boot_index = 0 2025-07-25 00:01:30.468205 | orchestrator | 00:01:30.467 STDOUT terraform:  + delete_on_termination = false 2025-07-25 00:01:30.468211 | orchestrator | 00:01:30.467 STDOUT terraform:  + destination_type = "volume" 2025-07-25 00:01:30.468218 | orchestrator | 00:01:30.467 STDOUT terraform:  + multiattach = false 2025-07-25 00:01:30.468224 | orchestrator | 00:01:30.467 STDOUT terraform:  + source_type = "volume" 2025-07-25 00:01:30.468375 | orchestrator | 00:01:30.467 STDOUT terraform:  + uuid = (known after apply) 2025-07-25 00:01:30.468442 | orchestrator | 00:01:30.468 STDOUT terraform:  } 2025-07-25 00:01:30.468483 | orchestrator | 00:01:30.468 STDOUT terraform:  + network { 2025-07-25 00:01:30.468532 | orchestrator | 00:01:30.468 STDOUT terraform:  + access_network = false 2025-07-25 00:01:30.468594 | orchestrator | 00:01:30.468 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-07-25 00:01:30.468657 | orchestrator | 00:01:30.468 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-25 00:01:30.468739 | orchestrator | 00:01:30.468 STDOUT terraform:  + mac = (known after apply) 2025-07-25 00:01:30.468816 | orchestrator | 00:01:30.468 STDOUT terraform:  + name = (known after apply) 2025-07-25 00:01:30.468881 | orchestrator | 00:01:30.468 STDOUT terraform:  + port = (known after apply) 2025-07-25 00:01:30.468964 | orchestrator | 00:01:30.468 STDOUT terraform:  + uuid = (known after apply) 2025-07-25 00:01:30.469002 | orchestrator | 00:01:30.468 STDOUT terraform:  } 2025-07-25 00:01:30.469035 | orchestrator | 00:01:30.469 STDOUT terraform:  } 2025-07-25 00:01:30.469143 | orchestrator | 00:01:30.469 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-25 00:01:30.469231 | orchestrator | 00:01:30.469 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-25 00:01:30.469314 | orchestrator | 00:01:30.469 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-25 00:01:30.469400 | orchestrator | 00:01:30.469 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-25 00:01:30.469485 | orchestrator | 00:01:30.469 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-25 00:01:30.469560 | orchestrator | 00:01:30.469 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.469612 | orchestrator | 00:01:30.469 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.469661 | orchestrator | 00:01:30.469 STDOUT terraform:  + config_drive = true 2025-07-25 00:01:30.469748 | orchestrator | 00:01:30.469 STDOUT terraform:  + created = (known after apply) 2025-07-25 00:01:30.469862 | orchestrator | 00:01:30.469 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-25 00:01:30.469946 | orchestrator | 00:01:30.469 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-25 00:01:30.470002 | orchestrator | 00:01:30.469 
STDOUT terraform:  + force_delete = false 2025-07-25 00:01:30.470105 | orchestrator | 00:01:30.470 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-25 00:01:30.470198 | orchestrator | 00:01:30.470 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.470274 | orchestrator | 00:01:30.470 STDOUT terraform:  + image_id = (known after apply) 2025-07-25 00:01:30.470356 | orchestrator | 00:01:30.470 STDOUT terraform:  + image_name = (known after apply) 2025-07-25 00:01:30.470413 | orchestrator | 00:01:30.470 STDOUT terraform:  + key_pair = "testbed" 2025-07-25 00:01:30.470493 | orchestrator | 00:01:30.470 STDOUT terraform:  + name = "testbed-node-2" 2025-07-25 00:01:30.470557 | orchestrator | 00:01:30.470 STDOUT terraform:  + power_state = "active" 2025-07-25 00:01:30.470648 | orchestrator | 00:01:30.470 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.470717 | orchestrator | 00:01:30.470 STDOUT terraform:  + security_groups = (known after apply) 2025-07-25 00:01:30.470782 | orchestrator | 00:01:30.470 STDOUT terraform:  + stop_before_destroy = false 2025-07-25 00:01:30.470868 | orchestrator | 00:01:30.470 STDOUT terraform:  + updated = (known after apply) 2025-07-25 00:01:30.471033 | orchestrator | 00:01:30.470 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-25 00:01:30.471108 | orchestrator | 00:01:30.471 STDOUT terraform:  + block_device { 2025-07-25 00:01:30.471166 | orchestrator | 00:01:30.471 STDOUT terraform:  + boot_index = 0 2025-07-25 00:01:30.471221 | orchestrator | 00:01:30.471 STDOUT terraform:  + delete_on_termination = false 2025-07-25 00:01:30.471312 | orchestrator | 00:01:30.471 STDOUT terraform:  + destination_type = "volume" 2025-07-25 00:01:30.471401 | orchestrator | 00:01:30.471 STDOUT terraform:  + multiattach = false 2025-07-25 00:01:30.471494 | orchestrator | 00:01:30.471 STDOUT terraform:  + source_type = "volume" 2025-07-25 00:01:30.471591 | orchestrator | 
00:01:30.471 STDOUT terraform:  + uuid = (known after apply) 2025-07-25 00:01:30.471641 | orchestrator | 00:01:30.471 STDOUT terraform:  } 2025-07-25 00:01:30.471681 | orchestrator | 00:01:30.471 STDOUT terraform:  + network { 2025-07-25 00:01:30.471731 | orchestrator | 00:01:30.471 STDOUT terraform:  + access_network = false 2025-07-25 00:01:30.471812 | orchestrator | 00:01:30.471 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-25 00:01:30.471879 | orchestrator | 00:01:30.471 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-25 00:01:30.471972 | orchestrator | 00:01:30.471 STDOUT terraform:  + mac = (known after apply) 2025-07-25 00:01:30.472047 | orchestrator | 00:01:30.471 STDOUT terraform:  + name = (known after apply) 2025-07-25 00:01:30.472120 | orchestrator | 00:01:30.472 STDOUT terraform:  + port = (known after apply) 2025-07-25 00:01:30.472193 | orchestrator | 00:01:30.472 STDOUT terraform:  + uuid = (known after apply) 2025-07-25 00:01:30.472229 | orchestrator | 00:01:30.472 STDOUT terraform:  } 2025-07-25 00:01:30.472268 | orchestrator | 00:01:30.472 STDOUT terraform:  } 2025-07-25 00:01:30.472362 | orchestrator | 00:01:30.472 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-07-25 00:01:30.472455 | orchestrator | 00:01:30.472 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-25 00:01:30.472530 | orchestrator | 00:01:30.472 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-25 00:01:30.472601 | orchestrator | 00:01:30.472 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-25 00:01:30.472677 | orchestrator | 00:01:30.472 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-25 00:01:30.472758 | orchestrator | 00:01:30.472 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.472816 | orchestrator | 00:01:30.472 STDOUT terraform:  + availability_zone = "nova" 2025-07-25 00:01:30.472867 | orchestrator | 
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [8] will be created
  # (eight further identical blocks: device, id, instance_id, region and
  # volume_id all (known after apply))

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  # (identical to node_port_management[0], except fixed_ip ip_address = "192.168.16.11")

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-25 00:01:30.495882 | orchestrator | 00:01:30.495 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-25 00:01:30.495943 | orchestrator | 00:01:30.495 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.495987 | orchestrator | 00:01:30.495 STDOUT terraform:  + device_id = (known after apply) 2025-07-25 00:01:30.496028 | orchestrator | 00:01:30.495 STDOUT terraform:  + device_owner = (known after apply) 2025-07-25 00:01:30.496069 | orchestrator | 00:01:30.496 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-25 00:01:30.496110 | orchestrator | 00:01:30.496 STDOUT terraform:  + dns_name = (known after apply) 2025-07-25 00:01:30.496152 | orchestrator | 00:01:30.496 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.496195 | orchestrator | 00:01:30.496 STDOUT terraform:  + mac_address = (known after apply) 2025-07-25 00:01:30.496237 | orchestrator | 00:01:30.496 STDOUT terraform:  + network_id = (known after apply) 2025-07-25 00:01:30.496278 | orchestrator | 00:01:30.496 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-25 00:01:30.496318 | orchestrator | 00:01:30.496 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-25 00:01:30.496359 | orchestrator | 00:01:30.496 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.496399 | orchestrator | 00:01:30.496 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-25 00:01:30.496440 | orchestrator | 00:01:30.496 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.496468 | orchestrator | 00:01:30.496 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.496502 | orchestrator | 00:01:30.496 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-25 00:01:30.496543 | orchestrator | 00:01:30.496 STDOUT terraform:  } 2025-07-25 00:01:30.496570 | orchestrator | 00:01:30.496 STDOUT terraform:  
+ allowed_address_pairs { 2025-07-25 00:01:30.496607 | orchestrator | 00:01:30.496 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-25 00:01:30.496629 | orchestrator | 00:01:30.496 STDOUT terraform:  } 2025-07-25 00:01:30.496655 | orchestrator | 00:01:30.496 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.496689 | orchestrator | 00:01:30.496 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-25 00:01:30.496715 | orchestrator | 00:01:30.496 STDOUT terraform:  } 2025-07-25 00:01:30.496741 | orchestrator | 00:01:30.496 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.496775 | orchestrator | 00:01:30.496 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-25 00:01:30.496795 | orchestrator | 00:01:30.496 STDOUT terraform:  } 2025-07-25 00:01:30.496826 | orchestrator | 00:01:30.496 STDOUT terraform:  + binding (known after apply) 2025-07-25 00:01:30.496847 | orchestrator | 00:01:30.496 STDOUT terraform:  + fixed_ip { 2025-07-25 00:01:30.496879 | orchestrator | 00:01:30.496 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-25 00:01:30.496915 | orchestrator | 00:01:30.496 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-25 00:01:30.496955 | orchestrator | 00:01:30.496 STDOUT terraform:  } 2025-07-25 00:01:30.496988 | orchestrator | 00:01:30.496 STDOUT terraform:  } 2025-07-25 00:01:30.497042 | orchestrator | 00:01:30.496 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-25 00:01:30.497093 | orchestrator | 00:01:30.497 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-25 00:01:30.497135 | orchestrator | 00:01:30.497 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-25 00:01:30.497176 | orchestrator | 00:01:30.497 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-25 00:01:30.497216 | orchestrator | 00:01:30.497 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-07-25 00:01:30.497258 | orchestrator | 00:01:30.497 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.497299 | orchestrator | 00:01:30.497 STDOUT terraform:  + device_id = (known after apply) 2025-07-25 00:01:30.497379 | orchestrator | 00:01:30.497 STDOUT terraform:  + device_owner = (known after apply) 2025-07-25 00:01:30.497428 | orchestrator | 00:01:30.497 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-25 00:01:30.497473 | orchestrator | 00:01:30.497 STDOUT terraform:  + dns_name = (known after apply) 2025-07-25 00:01:30.497519 | orchestrator | 00:01:30.497 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.497561 | orchestrator | 00:01:30.497 STDOUT terraform:  + mac_address = (known after apply) 2025-07-25 00:01:30.497602 | orchestrator | 00:01:30.497 STDOUT terraform:  + network_id = (known after apply) 2025-07-25 00:01:30.497643 | orchestrator | 00:01:30.497 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-25 00:01:30.497687 | orchestrator | 00:01:30.497 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-25 00:01:30.497730 | orchestrator | 00:01:30.497 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.497773 | orchestrator | 00:01:30.497 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-25 00:01:30.497815 | orchestrator | 00:01:30.497 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.497842 | orchestrator | 00:01:30.497 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.497877 | orchestrator | 00:01:30.497 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-25 00:01:30.497904 | orchestrator | 00:01:30.497 STDOUT terraform:  } 2025-07-25 00:01:30.497964 | orchestrator | 00:01:30.497 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.498002 | orchestrator | 00:01:30.497 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-25 00:01:30.498039 | 
orchestrator | 00:01:30.498 STDOUT terraform:  } 2025-07-25 00:01:30.498067 | orchestrator | 00:01:30.498 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.498104 | orchestrator | 00:01:30.498 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-25 00:01:30.498126 | orchestrator | 00:01:30.498 STDOUT terraform:  } 2025-07-25 00:01:30.498155 | orchestrator | 00:01:30.498 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.498190 | orchestrator | 00:01:30.498 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-25 00:01:30.498211 | orchestrator | 00:01:30.498 STDOUT terraform:  } 2025-07-25 00:01:30.498240 | orchestrator | 00:01:30.498 STDOUT terraform:  + binding (known after apply) 2025-07-25 00:01:30.498261 | orchestrator | 00:01:30.498 STDOUT terraform:  + fixed_ip { 2025-07-25 00:01:30.498292 | orchestrator | 00:01:30.498 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-25 00:01:30.498327 | orchestrator | 00:01:30.498 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-25 00:01:30.498348 | orchestrator | 00:01:30.498 STDOUT terraform:  } 2025-07-25 00:01:30.498368 | orchestrator | 00:01:30.498 STDOUT terraform:  } 2025-07-25 00:01:30.498419 | orchestrator | 00:01:30.498 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-25 00:01:30.498469 | orchestrator | 00:01:30.498 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-25 00:01:30.498512 | orchestrator | 00:01:30.498 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-25 00:01:30.498554 | orchestrator | 00:01:30.498 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-25 00:01:30.498594 | orchestrator | 00:01:30.498 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-25 00:01:30.498636 | orchestrator | 00:01:30.498 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.498682 | orchestrator | 
00:01:30.498 STDOUT terraform:  + device_id = (known after apply) 2025-07-25 00:01:30.498723 | orchestrator | 00:01:30.498 STDOUT terraform:  + device_owner = (known after apply) 2025-07-25 00:01:30.498764 | orchestrator | 00:01:30.498 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-25 00:01:30.498806 | orchestrator | 00:01:30.498 STDOUT terraform:  + dns_name = (known after apply) 2025-07-25 00:01:30.498847 | orchestrator | 00:01:30.498 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.498889 | orchestrator | 00:01:30.498 STDOUT terraform:  + mac_address = (known after apply) 2025-07-25 00:01:30.498944 | orchestrator | 00:01:30.498 STDOUT terraform:  + network_id = (known after apply) 2025-07-25 00:01:30.498994 | orchestrator | 00:01:30.498 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-25 00:01:30.499042 | orchestrator | 00:01:30.499 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-25 00:01:30.499084 | orchestrator | 00:01:30.499 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.499126 | orchestrator | 00:01:30.499 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-25 00:01:30.499167 | orchestrator | 00:01:30.499 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.499195 | orchestrator | 00:01:30.499 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.499229 | orchestrator | 00:01:30.499 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-25 00:01:30.499251 | orchestrator | 00:01:30.499 STDOUT terraform:  } 2025-07-25 00:01:30.499280 | orchestrator | 00:01:30.499 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.499315 | orchestrator | 00:01:30.499 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-25 00:01:30.499337 | orchestrator | 00:01:30.499 STDOUT terraform:  } 2025-07-25 00:01:30.499363 | orchestrator | 00:01:30.499 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 
00:01:30.499397 | orchestrator | 00:01:30.499 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-25 00:01:30.499419 | orchestrator | 00:01:30.499 STDOUT terraform:  } 2025-07-25 00:01:30.499446 | orchestrator | 00:01:30.499 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.499480 | orchestrator | 00:01:30.499 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-25 00:01:30.499500 | orchestrator | 00:01:30.499 STDOUT terraform:  } 2025-07-25 00:01:30.499529 | orchestrator | 00:01:30.499 STDOUT terraform:  + binding (known after apply) 2025-07-25 00:01:30.499550 | orchestrator | 00:01:30.499 STDOUT terraform:  + fixed_ip { 2025-07-25 00:01:30.499581 | orchestrator | 00:01:30.499 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-25 00:01:30.499616 | orchestrator | 00:01:30.499 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-25 00:01:30.499639 | orchestrator | 00:01:30.499 STDOUT terraform:  } 2025-07-25 00:01:30.499659 | orchestrator | 00:01:30.499 STDOUT terraform:  } 2025-07-25 00:01:30.499711 | orchestrator | 00:01:30.499 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-25 00:01:30.499761 | orchestrator | 00:01:30.499 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-25 00:01:30.499804 | orchestrator | 00:01:30.499 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-25 00:01:30.499845 | orchestrator | 00:01:30.499 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-25 00:01:30.499886 | orchestrator | 00:01:30.499 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-25 00:01:30.499938 | orchestrator | 00:01:30.499 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.499982 | orchestrator | 00:01:30.499 STDOUT terraform:  + device_id = (known after apply) 2025-07-25 00:01:30.500023 | orchestrator | 00:01:30.499 STDOUT terraform:  + device_owner = (known after 
apply) 2025-07-25 00:01:30.500066 | orchestrator | 00:01:30.500 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-25 00:01:30.500113 | orchestrator | 00:01:30.500 STDOUT terraform:  + dns_name = (known after apply) 2025-07-25 00:01:30.500158 | orchestrator | 00:01:30.500 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.500202 | orchestrator | 00:01:30.500 STDOUT terraform:  + mac_address = (known after apply) 2025-07-25 00:01:30.500244 | orchestrator | 00:01:30.500 STDOUT terraform:  + network_id = (known after apply) 2025-07-25 00:01:30.500285 | orchestrator | 00:01:30.500 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-25 00:01:30.500327 | orchestrator | 00:01:30.500 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-25 00:01:30.500369 | orchestrator | 00:01:30.500 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.500410 | orchestrator | 00:01:30.500 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-25 00:01:30.500454 | orchestrator | 00:01:30.500 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.500483 | orchestrator | 00:01:30.500 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.500520 | orchestrator | 00:01:30.500 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-25 00:01:30.500541 | orchestrator | 00:01:30.500 STDOUT terraform:  } 2025-07-25 00:01:30.500568 | orchestrator | 00:01:30.500 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.500603 | orchestrator | 00:01:30.500 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-25 00:01:30.500624 | orchestrator | 00:01:30.500 STDOUT terraform:  } 2025-07-25 00:01:30.500652 | orchestrator | 00:01:30.500 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.500686 | orchestrator | 00:01:30.500 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-25 00:01:30.500709 | orchestrator | 00:01:30.500 STDOUT terraform:  } 
2025-07-25 00:01:30.500736 | orchestrator | 00:01:30.500 STDOUT terraform:  + allowed_address_pairs { 2025-07-25 00:01:30.500770 | orchestrator | 00:01:30.500 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-25 00:01:30.500792 | orchestrator | 00:01:30.500 STDOUT terraform:  } 2025-07-25 00:01:30.500821 | orchestrator | 00:01:30.500 STDOUT terraform:  + binding (known after apply) 2025-07-25 00:01:30.500842 | orchestrator | 00:01:30.500 STDOUT terraform:  + fixed_ip { 2025-07-25 00:01:30.500873 | orchestrator | 00:01:30.500 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-25 00:01:30.500909 | orchestrator | 00:01:30.500 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-25 00:01:30.500955 | orchestrator | 00:01:30.500 STDOUT terraform:  } 2025-07-25 00:01:30.500978 | orchestrator | 00:01:30.500 STDOUT terraform:  } 2025-07-25 00:01:30.501031 | orchestrator | 00:01:30.500 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-25 00:01:30.501084 | orchestrator | 00:01:30.501 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-25 00:01:30.501110 | orchestrator | 00:01:30.501 STDOUT terraform:  + force_destroy = false 2025-07-25 00:01:30.501145 | orchestrator | 00:01:30.501 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.501188 | orchestrator | 00:01:30.501 STDOUT terraform:  + port_id = (known after apply) 2025-07-25 00:01:30.501223 | orchestrator | 00:01:30.501 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.501259 | orchestrator | 00:01:30.501 STDOUT terraform:  + router_id = (known after apply) 2025-07-25 00:01:30.501293 | orchestrator | 00:01:30.501 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-25 00:01:30.501313 | orchestrator | 00:01:30.501 STDOUT terraform:  } 2025-07-25 00:01:30.501355 | orchestrator | 00:01:30.501 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-07-25 00:01:30.501398 | orchestrator | 00:01:30.501 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-25 00:01:30.501441 | orchestrator | 00:01:30.501 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-25 00:01:30.501483 | orchestrator | 00:01:30.501 STDOUT terraform:  + all_tags = (known after apply) 2025-07-25 00:01:30.501512 | orchestrator | 00:01:30.501 STDOUT terraform:  + availability_zone_hints = [ 2025-07-25 00:01:30.501539 | orchestrator | 00:01:30.501 STDOUT terraform:  + "nova", 2025-07-25 00:01:30.501560 | orchestrator | 00:01:30.501 STDOUT terraform:  ] 2025-07-25 00:01:30.501602 | orchestrator | 00:01:30.501 STDOUT terraform:  + distributed = (known after apply) 2025-07-25 00:01:30.501644 | orchestrator | 00:01:30.501 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-25 00:01:30.501700 | orchestrator | 00:01:30.501 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-25 00:01:30.501748 | orchestrator | 00:01:30.501 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-25 00:01:30.501793 | orchestrator | 00:01:30.501 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.501830 | orchestrator | 00:01:30.501 STDOUT terraform:  + name = "testbed" 2025-07-25 00:01:30.501872 | orchestrator | 00:01:30.501 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.501914 | orchestrator | 00:01:30.501 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.501963 | orchestrator | 00:01:30.501 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-25 00:01:30.501985 | orchestrator | 00:01:30.501 STDOUT terraform:  } 2025-07-25 00:01:30.502062 | orchestrator | 00:01:30.501 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-25 00:01:30.502122 | orchestrator | 00:01:30.502 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-25 00:01:30.502153 | orchestrator | 00:01:30.502 STDOUT terraform:  + description = "ssh" 2025-07-25 00:01:30.502188 | orchestrator | 00:01:30.502 STDOUT terraform:  + direction = "ingress" 2025-07-25 00:01:30.502221 | orchestrator | 00:01:30.502 STDOUT terraform:  + ethertype = "IPv4" 2025-07-25 00:01:30.502272 | orchestrator | 00:01:30.502 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.502303 | orchestrator | 00:01:30.502 STDOUT terraform:  + port_range_max = 22 2025-07-25 00:01:30.502334 | orchestrator | 00:01:30.502 STDOUT terraform:  + port_range_min = 22 2025-07-25 00:01:30.502374 | orchestrator | 00:01:30.502 STDOUT terraform:  + protocol = "tcp" 2025-07-25 00:01:30.502419 | orchestrator | 00:01:30.502 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.502459 | orchestrator | 00:01:30.502 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-25 00:01:30.502501 | orchestrator | 00:01:30.502 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-25 00:01:30.502538 | orchestrator | 00:01:30.502 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-25 00:01:30.502581 | orchestrator | 00:01:30.502 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-25 00:01:30.502623 | orchestrator | 00:01:30.502 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.502643 | orchestrator | 00:01:30.502 STDOUT terraform:  } 2025-07-25 00:01:30.502701 | orchestrator | 00:01:30.502 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-25 00:01:30.502759 | orchestrator | 00:01:30.502 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-25 00:01:30.502794 | orchestrator | 00:01:30.502 STDOUT terraform:  + description = "wireguard" 2025-07-25 00:01:30.502834 | orchestrator 
| 00:01:30.502 STDOUT terraform:  + direction = "ingress" 2025-07-25 00:01:30.502869 | orchestrator | 00:01:30.502 STDOUT terraform:  + ethertype = "IPv4" 2025-07-25 00:01:30.502914 | orchestrator | 00:01:30.502 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.502962 | orchestrator | 00:01:30.502 STDOUT terraform:  + port_range_max = 51820 2025-07-25 00:01:30.502994 | orchestrator | 00:01:30.502 STDOUT terraform:  + port_range_min = 51820 2025-07-25 00:01:30.503027 | orchestrator | 00:01:30.503 STDOUT terraform:  + protocol = "udp" 2025-07-25 00:01:30.503070 | orchestrator | 00:01:30.503 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.503110 | orchestrator | 00:01:30.503 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-25 00:01:30.503152 | orchestrator | 00:01:30.503 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-25 00:01:30.503188 | orchestrator | 00:01:30.503 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-25 00:01:30.503234 | orchestrator | 00:01:30.503 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-25 00:01:30.503280 | orchestrator | 00:01:30.503 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.503301 | orchestrator | 00:01:30.503 STDOUT terraform:  } 2025-07-25 00:01:30.503359 | orchestrator | 00:01:30.503 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-07-25 00:01:30.503419 | orchestrator | 00:01:30.503 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-25 00:01:30.503454 | orchestrator | 00:01:30.503 STDOUT terraform:  + direction = "ingress" 2025-07-25 00:01:30.503485 | orchestrator | 00:01:30.503 STDOUT terraform:  + ethertype = "IPv4" 2025-07-25 00:01:30.503532 | orchestrator | 00:01:30.503 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.503568 | orchestrator | 
00:01:30.503 STDOUT terraform:  + protocol = "tcp" 2025-07-25 00:01:30.503610 | orchestrator | 00:01:30.503 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.503652 | orchestrator | 00:01:30.503 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-25 00:01:30.503694 | orchestrator | 00:01:30.503 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-25 00:01:30.503737 | orchestrator | 00:01:30.503 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-25 00:01:30.503780 | orchestrator | 00:01:30.503 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-25 00:01:30.503824 | orchestrator | 00:01:30.503 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.503845 | orchestrator | 00:01:30.503 STDOUT terraform:  } 2025-07-25 00:01:30.503903 | orchestrator | 00:01:30.503 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-25 00:01:30.503992 | orchestrator | 00:01:30.503 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-25 00:01:30.504041 | orchestrator | 00:01:30.504 STDOUT terraform:  + direction = "ingress" 2025-07-25 00:01:30.504081 | orchestrator | 00:01:30.504 STDOUT terraform:  + ethertype = "IPv4" 2025-07-25 00:01:30.504127 | orchestrator | 00:01:30.504 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.504159 | orchestrator | 00:01:30.504 STDOUT terraform:  + protocol = "udp" 2025-07-25 00:01:30.504205 | orchestrator | 00:01:30.504 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.504246 | orchestrator | 00:01:30.504 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-25 00:01:30.504288 | orchestrator | 00:01:30.504 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-25 00:01:30.504329 | orchestrator | 00:01:30.504 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-07-25 00:01:30.504371 | orchestrator | 00:01:30.504 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-25 00:01:30.504413 | orchestrator | 00:01:30.504 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.504435 | orchestrator | 00:01:30.504 STDOUT terraform:  } 2025-07-25 00:01:30.504493 | orchestrator | 00:01:30.504 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-25 00:01:30.504554 | orchestrator | 00:01:30.504 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-25 00:01:30.504589 | orchestrator | 00:01:30.504 STDOUT terraform:  + direction = "ingress" 2025-07-25 00:01:30.504621 | orchestrator | 00:01:30.504 STDOUT terraform:  + ethertype = "IPv4" 2025-07-25 00:01:30.504665 | orchestrator | 00:01:30.504 STDOUT terraform:  + id = (known after apply) 2025-07-25 00:01:30.504697 | orchestrator | 00:01:30.504 STDOUT terraform:  + protocol = "icmp" 2025-07-25 00:01:30.504749 | orchestrator | 00:01:30.504 STDOUT terraform:  + region = (known after apply) 2025-07-25 00:01:30.504790 | orchestrator | 00:01:30.504 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-25 00:01:30.504832 | orchestrator | 00:01:30.504 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-25 00:01:30.504868 | orchestrator | 00:01:30.504 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-25 00:01:30.504911 | orchestrator | 00:01:30.504 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-25 00:01:30.504963 | orchestrator | 00:01:30.504 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-25 00:01:30.504984 | orchestrator | 00:01:30.504 STDOUT terraform:  } 2025-07-25 00:01:30.505040 | orchestrator | 00:01:30.504 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-07-25 00:01:30.505098 | 
orchestrator | 00:01:30.505 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-07-25 00:01:30.505134 | orchestrator | 00:01:30.505 STDOUT terraform:  + direction = "ingress"
2025-07-25 00:01:30.505165 | orchestrator | 00:01:30.505 STDOUT terraform:  + ethertype = "IPv4"
2025-07-25 00:01:30.505208 | orchestrator | 00:01:30.505 STDOUT terraform:  + id = (known after apply)
2025-07-25 00:01:30.505244 | orchestrator | 00:01:30.505 STDOUT terraform:  + protocol = "tcp"
2025-07-25 00:01:30.505290 | orchestrator | 00:01:30.505 STDOUT terraform:  + region = (known after apply)
2025-07-25 00:01:30.505331 | orchestrator | 00:01:30.505 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-25 00:01:30.505374 | orchestrator | 00:01:30.505 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-25 00:01:30.505410 | orchestrator | 00:01:30.505 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-25 00:01:30.506482 | orchestrator | 00:01:30.505 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-25 00:01:30.508252 | orchestrator | 00:01:30.508 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-25 00:01:30.508278 | orchestrator | 00:01:30.508 STDOUT terraform:  }
2025-07-25 00:01:30.508304 | orchestrator | 00:01:30.508 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-07-25 00:01:30.508351 | orchestrator | 00:01:30.508 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-07-25 00:01:30.508375 | orchestrator | 00:01:30.508 STDOUT terraform:  + direction = "ingress"
2025-07-25 00:01:30.508398 | orchestrator | 00:01:30.508 STDOUT terraform:  + ethertype = "IPv4"
2025-07-25 00:01:30.508491 | orchestrator | 00:01:30.508 STDOUT terraform:  + id = (known after apply)
2025-07-25 00:01:30.508501 | orchestrator | 00:01:30.508 STDOUT terraform:  + protocol = "udp"
2025-07-25 00:01:30.508509 | orchestrator | 00:01:30.508 STDOUT terraform:  + region = (known after apply)
2025-07-25 00:01:30.508518 | orchestrator | 00:01:30.508 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-25 00:01:30.508587 | orchestrator | 00:01:30.508 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-25 00:01:30.508610 | orchestrator | 00:01:30.508 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-25 00:01:30.508620 | orchestrator | 00:01:30.508 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-25 00:01:30.508640 | orchestrator | 00:01:30.508 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-25 00:01:30.508653 | orchestrator | 00:01:30.508 STDOUT terraform:  }
2025-07-25 00:01:30.508743 | orchestrator | 00:01:30.508 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-07-25 00:01:30.508753 | orchestrator | 00:01:30.508 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-07-25 00:01:30.508761 | orchestrator | 00:01:30.508 STDOUT terraform:  + direction = "ingress"
2025-07-25 00:01:30.508824 | orchestrator | 00:01:30.508 STDOUT terraform:  + ethertype = "IPv4"
2025-07-25 00:01:30.508834 | orchestrator | 00:01:30.508 STDOUT terraform:  + id = (known after apply)
2025-07-25 00:01:30.508843 | orchestrator | 00:01:30.508 STDOUT terraform:  + protocol = "icmp"
2025-07-25 00:01:30.508880 | orchestrator | 00:01:30.508 STDOUT terraform:  + region = (known after apply)
2025-07-25 00:01:30.508908 | orchestrator | 00:01:30.508 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-25 00:01:30.509004 | orchestrator | 00:01:30.508 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-25 00:01:30.509015 | orchestrator | 00:01:30.508 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-25 00:01:30.509025 | orchestrator | 00:01:30.508 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-25 00:01:30.509063 | orchestrator | 00:01:30.509 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-25 00:01:30.509075 | orchestrator | 00:01:30.509 STDOUT terraform:  }
2025-07-25 00:01:30.509132 | orchestrator | 00:01:30.509 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-07-25 00:01:30.509171 | orchestrator | 00:01:30.509 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-07-25 00:01:30.509182 | orchestrator | 00:01:30.509 STDOUT terraform:  + description = "vrrp"
2025-07-25 00:01:30.509217 | orchestrator | 00:01:30.509 STDOUT terraform:  + direction = "ingress"
2025-07-25 00:01:30.509245 | orchestrator | 00:01:30.509 STDOUT terraform:  + ethertype = "IPv4"
2025-07-25 00:01:30.509299 | orchestrator | 00:01:30.509 STDOUT terraform:  + id = (known after apply)
2025-07-25 00:01:30.509310 | orchestrator | 00:01:30.509 STDOUT terraform:  + protocol = "112"
2025-07-25 00:01:30.509319 | orchestrator | 00:01:30.509 STDOUT terraform:  + region = (known after apply)
2025-07-25 00:01:30.509381 | orchestrator | 00:01:30.509 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-25 00:01:30.509390 | orchestrator | 00:01:30.509 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-25 00:01:30.509460 | orchestrator | 00:01:30.509 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-25 00:01:30.509476 | orchestrator | 00:01:30.509 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-25 00:01:30.509489 | orchestrator | 00:01:30.509 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-25 00:01:30.509496 | orchestrator | 00:01:30.509 STDOUT terraform:  }
2025-07-25 00:01:30.509543 | orchestrator | 00:01:30.509 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-07-25 00:01:30.509577 | orchestrator | 00:01:30.509 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-07-25 00:01:30.509609 | orchestrator | 00:01:30.509 STDOUT terraform:  + all_tags = (known after apply)
2025-07-25 00:01:30.509636 | orchestrator | 00:01:30.509 STDOUT terraform:  + description = "management security group"
2025-07-25 00:01:30.509694 | orchestrator | 00:01:30.509 STDOUT terraform:  + id = (known after apply)
2025-07-25 00:01:30.509704 | orchestrator | 00:01:30.509 STDOUT terraform:  + name = "testbed-management"
2025-07-25 00:01:30.509713 | orchestrator | 00:01:30.509 STDOUT terraform:  + region = (known after apply)
2025-07-25 00:01:30.509757 | orchestrator | 00:01:30.509 STDOUT terraform:  + stateful = (known after apply)
2025-07-25 00:01:30.509769 | orchestrator | 00:01:30.509 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-25 00:01:30.509776 | orchestrator | 00:01:30.509 STDOUT terraform:  }
2025-07-25 00:01:30.509816 | orchestrator | 00:01:30.509 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-07-25 00:01:30.509857 | orchestrator | 00:01:30.509 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-07-25 00:01:30.509904 | orchestrator | 00:01:30.509 STDOUT terraform:  + all_tags = (known after apply)
2025-07-25 00:01:30.509916 | orchestrator | 00:01:30.509 STDOUT terraform:  + description = "node security group"
2025-07-25 00:01:30.509940 | orchestrator | 00:01:30.509 STDOUT terraform:  + id = (known after apply)
2025-07-25 00:01:30.509950 | orchestrator | 00:01:30.509 STDOUT terraform:  + name = "testbed-node"
2025-07-25 00:01:30.509983 | orchestrator | 00:01:30.509 STDOUT terraform:  + region = (known after apply)
2025-07-25 00:01:30.510008 | orchestrator | 00:01:30.509 STDOUT terraform:  + stateful = (known after apply)
2025-07-25 00:01:30.510046 | orchestrator | 00:01:30.510 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-25 00:01:30.510056 | orchestrator | 00:01:30.510 STDOUT terraform:  }
2025-07-25 00:01:30.510100 | orchestrator | 00:01:30.510 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-07-25 00:01:30.510138 | orchestrator | 00:01:30.510 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-07-25 00:01:30.510183 | orchestrator | 00:01:30.510 STDOUT terraform:  + all_tags = (known after apply)
2025-07-25 00:01:30.510193 | orchestrator | 00:01:30.510 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-07-25 00:01:30.510245 | orchestrator | 00:01:30.510 STDOUT terraform:  + dns_nameservers = [
2025-07-25 00:01:30.510255 | orchestrator | 00:01:30.510 STDOUT terraform:  + "8.8.8.8",
2025-07-25 00:01:30.510262 | orchestrator | 00:01:30.510 STDOUT terraform:  + "9.9.9.9",
2025-07-25 00:01:30.510274 | orchestrator | 00:01:30.510 STDOUT terraform:  ]
2025-07-25 00:01:30.510283 | orchestrator | 00:01:30.510 STDOUT terraform:  + enable_dhcp = true
2025-07-25 00:01:30.510290 | orchestrator | 00:01:30.510 STDOUT terraform:  + gateway_ip = (known after apply)
2025-07-25 00:01:30.510348 | orchestrator | 00:01:30.510 STDOUT terraform:  + id = (known after apply)
2025-07-25 00:01:30.513525 | orchestrator | 00:01:30.510 STDOUT terraform:  + ip_version = 4
2025-07-25 00:01:30.513565 | orchestrator | 00:01:30.513 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-07-25 00:01:30.513575 | orchestrator | 00:01:30.513 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-07-25 00:01:30.513620 | orchestrator | 00:01:30.513 STDOUT terraform:  + name = "subnet-testbed-management"
2025-07-25 00:01:30.513702 | orchestrator | 00:01:30.513 STDOUT terraform:  + network_id = (known after apply)
2025-07-25 00:01:30.513713 | orchestrator | 00:01:30.513 STDOUT terraform:  + no_gateway = false
2025-07-25 00:01:30.513720 | orchestrator | 00:01:30.513 STDOUT terraform:  + region = (known after apply)
2025-07-25 00:01:30.513730 | orchestrator | 00:01:30.513 STDOUT terraform:  + service_types = (known after apply)
2025-07-25 00:01:30.513738 | orchestrator | 00:01:30.513 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-25 00:01:30.513747 | orchestrator | 00:01:30.513 STDOUT terraform:  + allocation_pool {
2025-07-25 00:01:30.513787 | orchestrator | 00:01:30.513 STDOUT terraform:  + end = "192.168.31.250"
2025-07-25 00:01:30.513799 | orchestrator | 00:01:30.513 STDOUT terraform:  + start = "192.168.31.200"
2025-07-25 00:01:30.513807 | orchestrator | 00:01:30.513 STDOUT terraform:  }
2025-07-25 00:01:30.513816 | orchestrator | 00:01:30.513 STDOUT terraform:  }
2025-07-25 00:01:30.513826 | orchestrator | 00:01:30.513 STDOUT terraform:  # terraform_data.image will be created
2025-07-25 00:01:30.513858 | orchestrator | 00:01:30.513 STDOUT terraform:  + resource "terraform_data" "image" {
2025-07-25 00:01:30.513868 | orchestrator | 00:01:30.513 STDOUT terraform:  + id = (known after apply)
2025-07-25 00:01:30.513896 | orchestrator | 00:01:30.513 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-25 00:01:30.513910 | orchestrator | 00:01:30.513 STDOUT terraform:  + output = (known after apply)
2025-07-25 00:01:30.513919 | orchestrator | 00:01:30.513 STDOUT terraform:  }
2025-07-25 00:01:30.513992 | orchestrator | 00:01:30.513 STDOUT terraform:  # terraform_data.image_node will be created
2025-07-25 00:01:30.514030 | orchestrator | 00:01:30.513 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-07-25 00:01:30.514053 | orchestrator | 00:01:30.513 STDOUT terraform:  + id = (known after apply)
2025-07-25 00:01:30.514060 | orchestrator | 00:01:30.513 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-25 00:01:30.514070 | orchestrator | 00:01:30.514 STDOUT terraform:  + output = (known after apply)
2025-07-25 00:01:30.514077 | orchestrator | 00:01:30.514 STDOUT terraform:  }
2025-07-25 00:01:30.514121 | orchestrator | 00:01:30.514 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-07-25 00:01:30.522042 | orchestrator | 00:01:30.514 STDOUT terraform: Changes to Outputs:
2025-07-25 00:01:30.522092 | orchestrator | 00:01:30.521 STDOUT terraform:  + manager_address = (sensitive value)
2025-07-25 00:01:30.522098 | orchestrator | 00:01:30.521 STDOUT terraform:  + private_key = (sensitive value)
2025-07-25 00:01:30.688701 | orchestrator | 00:01:30.688 STDOUT terraform: terraform_data.image: Creating...
2025-07-25 00:01:30.688755 | orchestrator | 00:01:30.688 STDOUT terraform: terraform_data.image_node: Creating...
2025-07-25 00:01:30.688763 | orchestrator | 00:01:30.688 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=6ac05c1d-1302-3dbe-24ac-2af7539ab01e]
2025-07-25 00:01:30.688776 | orchestrator | 00:01:30.688 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=ada2cac4-106a-51c7-8e98-dc47c90c77e4]
2025-07-25 00:01:30.699890 | orchestrator | 00:01:30.699 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-07-25 00:01:30.707056 | orchestrator | 00:01:30.706 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-07-25 00:01:30.724498 | orchestrator | 00:01:30.722 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-07-25 00:01:30.724545 | orchestrator | 00:01:30.722 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-07-25 00:01:30.724552 | orchestrator | 00:01:30.722 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-25 00:01:30.725529 | orchestrator | 00:01:30.725 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-07-25 00:01:30.730263 | orchestrator | 00:01:30.729 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-07-25 00:01:30.732810 | orchestrator | 00:01:30.732 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-07-25 00:01:30.742190 | orchestrator | 00:01:30.742 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-07-25 00:01:30.758027 | orchestrator | 00:01:30.757 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-07-25 00:01:31.201362 | orchestrator | 00:01:31.201 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-25 00:01:31.205340 | orchestrator | 00:01:31.205 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-07-25 00:01:31.213520 | orchestrator | 00:01:31.213 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-07-25 00:01:31.214161 | orchestrator | 00:01:31.214 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-07-25 00:01:31.220271 | orchestrator | 00:01:31.219 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-07-25 00:01:31.220366 | orchestrator | 00:01:31.220 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-07-25 00:01:31.830297 | orchestrator | 00:01:31.829 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=4adcae48-f003-4a79-9140-5b45be884e9b]
2025-07-25 00:01:31.837593 | orchestrator | 00:01:31.837 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-25 00:01:34.352331 | orchestrator | 00:01:34.352 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=ab9d4293-a852-4593-bc10-c80f3c63e376]
2025-07-25 00:01:34.355837 | orchestrator | 00:01:34.355 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=e75e6bce-eb82-4984-bdc0-bdceee94e470]
2025-07-25 00:01:34.361163 | orchestrator | 00:01:34.361 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-25 00:01:34.361383 | orchestrator | 00:01:34.361 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=ea9d6498-9525-451b-9457-c5998a4784b9]
2025-07-25 00:01:34.367612 | orchestrator | 00:01:34.367 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-25 00:01:34.374129 | orchestrator | 00:01:34.373 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-07-25 00:01:34.378749 | orchestrator | 00:01:34.377 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=3614eeff7e52ada9033df54855faacb563947123]
2025-07-25 00:01:34.386765 | orchestrator | 00:01:34.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-07-25 00:01:34.392268 | orchestrator | 00:01:34.392 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=8bf299c1-9944-47e0-9e3a-9967da5044db]
2025-07-25 00:01:34.394559 | orchestrator | 00:01:34.394 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=a823f8dd-ef96-4d24-a855-e812dea7b16a]
2025-07-25 00:01:34.402090 | orchestrator | 00:01:34.399 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-25 00:01:34.405853 | orchestrator | 00:01:34.405 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-25 00:01:34.416527 | orchestrator | 00:01:34.415 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=f8c90ea7-d546-4eec-89d1-fd769a96fd43]
2025-07-25 00:01:34.423565 | orchestrator | 00:01:34.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-25 00:01:34.461035 | orchestrator | 00:01:34.460 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=d75b026d-3a2c-465a-89fa-0446a095128a]
2025-07-25 00:01:34.475659 | orchestrator | 00:01:34.475 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-25 00:01:34.481343 | orchestrator | 00:01:34.481 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=426606122e00362e3f324c018893a58411390ecb]
2025-07-25 00:01:34.491783 | orchestrator | 00:01:34.491 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-25 00:01:34.498599 | orchestrator | 00:01:34.498 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=4db98c90-37d4-4a4c-a0c5-0d0f06fe8ada]
2025-07-25 00:01:34.525397 | orchestrator | 00:01:34.525 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=8e589102-ad16-49a5-b4fe-353bebc4b712]
2025-07-25 00:01:35.196976 | orchestrator | 00:01:35.196 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=557a65c2-2cb6-4c65-ac47-a7fdc611d864]
2025-07-25 00:01:35.388827 | orchestrator | 00:01:35.388 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=9d4e750f-04e9-4460-a70e-4754f411707b]
2025-07-25 00:01:35.401030 | orchestrator | 00:01:35.400 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-25 00:01:37.767886 | orchestrator | 00:01:37.767 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=13776449-cd89-42c8-8e32-fdc7c65fca0c]
2025-07-25 00:01:37.805168 | orchestrator | 00:01:37.804 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=35845ad8-e10d-4e84-a44b-e75a85114fe6]
2025-07-25 00:01:37.839387 | orchestrator | 00:01:37.839 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=c73c0ea1-8b7f-42bb-be5a-ed4191934764]
2025-07-25 00:01:37.840425 | orchestrator | 00:01:37.840 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=77536660-e98f-4311-b034-d3c025327ffc]
2025-07-25 00:01:37.873452 | orchestrator | 00:01:37.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=4a1ff0a0-5034-44f8-a5eb-737b1a4d4c06]
2025-07-25 00:01:37.906729 | orchestrator | 00:01:37.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=f6e4dd3a-6118-4341-989d-b1380a143def]
2025-07-25 00:01:38.177010 | orchestrator | 00:01:38.176 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=d00959ba-a33c-4656-a150-7d506c7f2176]
2025-07-25 00:01:38.186503 | orchestrator | 00:01:38.186 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-25 00:01:38.188135 | orchestrator | 00:01:38.187 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-07-25 00:01:38.189661 | orchestrator | 00:01:38.189 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-25 00:01:38.371997 | orchestrator | 00:01:38.371 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=c70dbb4e-435e-4caf-865f-c3eb6e2c6a61]
2025-07-25 00:01:38.383782 | orchestrator | 00:01:38.383 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-07-25 00:01:38.383889 | orchestrator | 00:01:38.383 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-07-25 00:01:38.384001 | orchestrator | 00:01:38.383 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-07-25 00:01:38.389183 | orchestrator | 00:01:38.389 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-07-25 00:01:38.389916 | orchestrator | 00:01:38.389 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-07-25 00:01:38.391068 | orchestrator | 00:01:38.390 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-07-25 00:01:38.443125 | orchestrator | 00:01:38.442 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=063c0ccd-3b1f-4ded-bd08-4628ab0606e7]
2025-07-25 00:01:38.463350 | orchestrator | 00:01:38.463 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-07-25 00:01:38.464683 | orchestrator | 00:01:38.464 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-07-25 00:01:38.464792 | orchestrator | 00:01:38.464 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-07-25 00:01:38.596438 | orchestrator | 00:01:38.596 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=461a1eb6-87c5-4927-992e-87f0b85e7cc1]
2025-07-25 00:01:38.603653 | orchestrator | 00:01:38.603 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-07-25 00:01:38.787326 | orchestrator | 00:01:38.786 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=231a61ef-9df0-4511-908b-3d83b597ee1b]
2025-07-25 00:01:38.805305 | orchestrator | 00:01:38.805 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-07-25 00:01:38.844782 | orchestrator | 00:01:38.844 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=487a7261-c12b-47d7-a743-8494013ad28b]
2025-07-25 00:01:38.858634 | orchestrator | 00:01:38.858 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-07-25 00:01:38.968541 | orchestrator | 00:01:38.968 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=5e139581-8cf3-4081-9050-1951f6d4099b]
2025-07-25 00:01:38.989037 | orchestrator | 00:01:38.988 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-07-25 00:01:39.032436 | orchestrator | 00:01:39.032 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=5d010e7d-deb2-4558-9da1-935567454433]
2025-07-25 00:01:39.057644 | orchestrator | 00:01:39.057 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-07-25 00:01:39.119520 | orchestrator | 00:01:39.119 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=f58ea015-415b-4222-896d-ce7dae2ee823]
2025-07-25 00:01:39.124009 | orchestrator | 00:01:39.123 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=e8ba19ce-3b3c-4e38-82c6-89588f1ca7ea]
2025-07-25 00:01:39.128588 | orchestrator | 00:01:39.128 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-07-25 00:01:39.134626 | orchestrator | 00:01:39.134 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-07-25 00:01:39.168887 | orchestrator | 00:01:39.168 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=c05ca2b5-3389-4866-8abc-16645348220c]
2025-07-25 00:01:39.224059 | orchestrator | 00:01:39.223 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=f9f4601b-4d1c-4b89-b8f2-1ff93c40e7c4]
2025-07-25 00:01:39.323163 | orchestrator | 00:01:39.322 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=f86d897c-215c-4f0e-9a41-88498e60e7cf]
2025-07-25 00:01:39.394015 | orchestrator | 00:01:39.393 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=9c40de34-8044-4256-b441-f4e99818b093]
2025-07-25 00:01:39.609698 | orchestrator | 00:01:39.609 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=dfaa182a-fe74-4cae-b769-0bb8506e1cf1]
2025-07-25 00:01:39.660670 | orchestrator | 00:01:39.660 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=e38f99b8-2c31-4c30-8b09-ba991aec4970]
2025-07-25 00:01:39.715726 | orchestrator | 00:01:39.715 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=9cf3cb16-df3b-486d-b9c8-45f823756a91]
2025-07-25 00:01:39.747713 | orchestrator | 00:01:39.747 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=303da216-d145-4511-b554-af2fe1014875]
2025-07-25 00:01:39.907306 | orchestrator | 00:01:39.906 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=907f7683-97a6-49e1-9916-ef84e50cbbd6]
2025-07-25 00:01:41.153042 | orchestrator | 00:01:41.152 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=dd9660b4-bb46-4154-b49a-a335e922923a]
2025-07-25 00:01:41.170708 | orchestrator | 00:01:41.170 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-07-25 00:01:41.177692 | orchestrator | 00:01:41.177 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-07-25 00:01:41.196693 | orchestrator | 00:01:41.196 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-07-25 00:01:41.197208 | orchestrator | 00:01:41.197 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-07-25 00:01:41.199143 | orchestrator | 00:01:41.199 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-07-25 00:01:41.201317 | orchestrator | 00:01:41.201 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-07-25 00:01:41.214415 | orchestrator | 00:01:41.214 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-07-25 00:01:42.769854 | orchestrator | 00:01:42.769 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=34adaa85-dc26-48eb-a05c-0e491c6d2fed] 2025-07-25 00:01:42.993631 | orchestrator | 00:01:42.779 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-07-25 00:01:42.993713 | orchestrator | 00:01:42.786 STDOUT terraform: local_file.inventory: Creating... 2025-07-25 00:01:42.993737 | orchestrator | 00:01:42.786 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-07-25 00:01:42.993783 | orchestrator | 00:01:42.796 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=7a06d4c99b6a5b8baf1c975a6a4f93b828caf69c] 2025-07-25 00:01:42.993798 | orchestrator | 00:01:42.796 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=c337be7b2eac9d38f3ba7a94570848e489d96f12] 2025-07-25 00:01:44.037451 | orchestrator | 00:01:44.037 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=34adaa85-dc26-48eb-a05c-0e491c6d2fed] 2025-07-25 00:01:51.186359 | orchestrator | 00:01:51.185 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-07-25 00:01:51.199437 | orchestrator | 00:01:51.199 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-07-25 00:01:51.199672 | orchestrator | 00:01:51.199 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-07-25 00:01:51.207379 | orchestrator | 00:01:51.207 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-07-25 00:01:51.207444 | orchestrator | 00:01:51.207 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[10s elapsed] 2025-07-25 00:01:51.215797 | orchestrator | 00:01:51.215 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-07-25 00:02:01.189005 | orchestrator | 00:02:01.188 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-07-25 00:02:01.200137 | orchestrator | 00:02:01.199 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-07-25 00:02:01.200247 | orchestrator | 00:02:01.200 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-07-25 00:02:01.208395 | orchestrator | 00:02:01.208 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-07-25 00:02:01.208488 | orchestrator | 00:02:01.208 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-07-25 00:02:01.216976 | orchestrator | 00:02:01.216 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-07-25 00:02:01.771515 | orchestrator | 00:02:01.771 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=939c69f4-6a03-47e0-ba7d-48590216a047] 2025-07-25 00:02:01.793172 | orchestrator | 00:02:01.792 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=a1a69798-0887-4271-8aa5-6614156bb265] 2025-07-25 00:02:02.023647 | orchestrator | 00:02:02.023 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=46218efd-4aca-480a-995e-090af768e5a8] 2025-07-25 00:02:11.200595 | orchestrator | 00:02:11.200 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-07-25 00:02:11.208596 | orchestrator | 00:02:11.208 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[30s elapsed]
2025-07-25 00:02:11.208704 | orchestrator | 00:02:11.208 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-07-25 00:02:12.365098 | orchestrator | 00:02:12.364 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=4e0899ab-bc83-447c-bdab-c8dc86e88f2b]
2025-07-25 00:02:12.447227 | orchestrator | 00:02:12.446 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=0fba3a88-b57c-4f36-a96d-53891b709ae2]
2025-07-25 00:02:12.556019 | orchestrator | 00:02:12.555 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=bd151376-9d8b-49f7-bdab-387e61d03c7a]
2025-07-25 00:02:12.571402 | orchestrator | 00:02:12.571 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-07-25 00:02:12.571917 | orchestrator | 00:02:12.571 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-07-25 00:02:12.579255 | orchestrator | 00:02:12.579 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-07-25 00:02:12.582355 | orchestrator | 00:02:12.582 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=400972587654878521]
2025-07-25 00:02:12.582910 | orchestrator | 00:02:12.582 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-07-25 00:02:12.597344 | orchestrator | 00:02:12.597 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-07-25 00:02:12.598715 | orchestrator | 00:02:12.598 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-07-25 00:02:12.602966 | orchestrator | 00:02:12.602 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-07-25 00:02:12.607692 | orchestrator | 00:02:12.607 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-07-25 00:02:12.612326 | orchestrator | 00:02:12.612 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-07-25 00:02:12.614467 | orchestrator | 00:02:12.614 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-07-25 00:02:12.616343 | orchestrator | 00:02:12.616 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-07-25 00:02:15.967800 | orchestrator | 00:02:15.966 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=0fba3a88-b57c-4f36-a96d-53891b709ae2/8e589102-ad16-49a5-b4fe-353bebc4b712]
2025-07-25 00:02:15.977924 | orchestrator | 00:02:15.977 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=4e0899ab-bc83-447c-bdab-c8dc86e88f2b/ab9d4293-a852-4593-bc10-c80f3c63e376]
2025-07-25 00:02:15.991794 | orchestrator | 00:02:15.991 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=939c69f4-6a03-47e0-ba7d-48590216a047/4db98c90-37d4-4a4c-a0c5-0d0f06fe8ada]
2025-07-25 00:02:16.100345 | orchestrator | 00:02:16.099 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=0fba3a88-b57c-4f36-a96d-53891b709ae2/8bf299c1-9944-47e0-9e3a-9967da5044db]
2025-07-25 00:02:16.132243 | orchestrator | 00:02:16.131 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=4e0899ab-bc83-447c-bdab-c8dc86e88f2b/e75e6bce-eb82-4984-bdc0-bdceee94e470]
2025-07-25 00:02:16.185871 | orchestrator | 00:02:16.185 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=939c69f4-6a03-47e0-ba7d-48590216a047/f8c90ea7-d546-4eec-89d1-fd769a96fd43]
2025-07-25 00:02:17.872432 | orchestrator | 00:02:17.872 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=0fba3a88-b57c-4f36-a96d-53891b709ae2/d75b026d-3a2c-465a-89fa-0446a095128a]
2025-07-25 00:02:22.203422 | orchestrator | 00:02:22.202 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=939c69f4-6a03-47e0-ba7d-48590216a047/a823f8dd-ef96-4d24-a855-e812dea7b16a]
2025-07-25 00:02:22.232990 | orchestrator | 00:02:22.232 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=4e0899ab-bc83-447c-bdab-c8dc86e88f2b/ea9d6498-9525-451b-9457-c5998a4784b9]
2025-07-25 00:02:22.615379 | orchestrator | 00:02:22.615 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-07-25 00:02:32.616242 | orchestrator | 00:02:32.615 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-07-25 00:02:32.895267 | orchestrator | 00:02:32.894 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=e6cac5ac-6ee2-4ca8-894c-0ff65119c301]
2025-07-25 00:02:32.913108 | orchestrator | 00:02:32.912 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-07-25 00:02:32.913193 | orchestrator | 00:02:32.913 STDOUT terraform: Outputs:
2025-07-25 00:02:32.913205 | orchestrator | 00:02:32.913 STDOUT terraform: manager_address =
2025-07-25 00:02:32.913213 | orchestrator | 00:02:32.913 STDOUT terraform: private_key =
2025-07-25 00:02:33.037399 | orchestrator | ok: Runtime: 0:01:09.393626
2025-07-25 00:02:33.061524 |
2025-07-25 00:02:33.061647 | TASK [Create infrastructure (stable)]
2025-07-25 00:02:33.594637 | orchestrator | skipping: Conditional result was False
2025-07-25 00:02:33.603885 |
2025-07-25 00:02:33.604020 | TASK [Fetch manager address]
2025-07-25 00:02:34.046629 | orchestrator | ok
2025-07-25 00:02:34.054999 |
2025-07-25 00:02:34.055133 | TASK [Set manager_host address]
2025-07-25 00:02:34.126860 | orchestrator | ok
2025-07-25 00:02:34.134062 |
2025-07-25 00:02:34.134221 | LOOP [Update ansible collections]
2025-07-25 00:02:37.046975 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-25 00:02:37.047309 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-25 00:02:37.047357 | orchestrator | Starting galaxy collection install process
2025-07-25 00:02:37.047388 | orchestrator | Process install dependency map
2025-07-25 00:02:37.047415 | orchestrator | Starting collection install process
2025-07-25 00:02:37.047440 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons'
2025-07-25 00:02:37.047470 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons
2025-07-25 00:02:37.047499 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-07-25 00:02:37.047561 | orchestrator | ok: Item: commons Runtime: 0:00:02.580931
2025-07-25 00:02:38.190131 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-25 00:02:38.190449 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-25 00:02:38.191252 | orchestrator | Starting galaxy collection install process
2025-07-25 00:02:38.191357 | orchestrator | Process install dependency map
2025-07-25 00:02:38.191406 | orchestrator | Starting collection install process
2025-07-25 00:02:38.191453 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services'
2025-07-25 00:02:38.191496 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services
2025-07-25 00:02:38.191535 | orchestrator | osism.services:999.0.0 was installed successfully
2025-07-25 00:02:38.191607 | orchestrator | ok: Item: services Runtime: 0:00:00.862236
2025-07-25 00:02:38.221468 |
2025-07-25 00:02:38.221591 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-25 00:02:48.798994 | orchestrator | ok
2025-07-25 00:02:48.812487 |
2025-07-25 00:02:48.812641 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-25 00:03:48.862016 | orchestrator | ok
2025-07-25 00:03:48.874896 |
2025-07-25 00:03:48.875126 | TASK [Fetch manager ssh hostkey]
2025-07-25 00:03:50.449623 | orchestrator | Output suppressed because no_log was given
2025-07-25 00:03:50.465791 |
2025-07-25 00:03:50.465973 | TASK [Get ssh keypair from terraform environment]
2025-07-25 00:03:51.018208 | orchestrator | ok: Runtime: 0:00:00.011001
2025-07-25 00:03:51.034036 |
2025-07-25 00:03:51.034234 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-25 00:03:51.072732 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-25 00:03:51.086662 |
2025-07-25 00:03:51.086821 | TASK [Run manager part 0]
2025-07-25 00:03:52.333728 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-25 00:03:52.424497 | orchestrator |
2025-07-25 00:03:52.424547 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-07-25 00:03:52.424554 | orchestrator |
2025-07-25 00:03:52.424568 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-07-25 00:03:54.211025 | orchestrator | ok: [testbed-manager]
2025-07-25 00:03:54.211116 | orchestrator |
2025-07-25 00:03:54.211165 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-25 00:03:54.211189 | orchestrator |
2025-07-25 00:03:54.211210 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-25 00:03:56.257861 | orchestrator | ok: [testbed-manager]
2025-07-25 00:03:56.258071 | orchestrator |
2025-07-25 00:03:56.258095 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-25 00:03:56.944057 | orchestrator | ok: [testbed-manager]
2025-07-25 00:03:56.944114 | orchestrator |
2025-07-25 00:03:56.944123 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-25 00:03:56.994588 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:03:56.994635 | orchestrator |
2025-07-25 00:03:56.994645 | orchestrator | TASK [Update package cache] ****************************************************
2025-07-25 00:03:57.028805 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:03:57.028882 | orchestrator |
2025-07-25 00:03:57.028899 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-25 00:03:57.064056 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:03:57.064114 | orchestrator |
2025-07-25 00:03:57.064122 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-25 00:03:57.093576 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:03:57.093626 | orchestrator |
2025-07-25 00:03:57.093633 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-25 00:03:57.131317 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:03:57.131371 | orchestrator |
2025-07-25 00:03:57.131381 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-07-25 00:03:57.168918 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:03:57.168993 | orchestrator |
2025-07-25 00:03:57.169007 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-07-25 00:03:57.196736 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:03:57.196801 | orchestrator |
2025-07-25 00:03:57.196814 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-07-25 00:03:58.003734 | orchestrator | changed: [testbed-manager]
2025-07-25 00:03:58.003822 | orchestrator |
2025-07-25 00:03:58.003838 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-07-25 00:06:18.041547 | orchestrator | changed: [testbed-manager]
2025-07-25 00:06:18.041646 | orchestrator |
2025-07-25 00:06:18.041661 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-25 00:07:55.506275 | orchestrator | changed: [testbed-manager]
2025-07-25 00:07:55.506392 | orchestrator |
2025-07-25 00:07:55.506410 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-25 00:08:23.046619 | orchestrator | changed: [testbed-manager]
2025-07-25 00:08:23.046752 | orchestrator |
2025-07-25 00:08:23.046773 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-25 00:08:32.244635 | orchestrator | changed: [testbed-manager]
2025-07-25 00:08:32.244792 | orchestrator |
2025-07-25 00:08:32.244819 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-25 00:08:32.289238 | orchestrator | ok: [testbed-manager]
2025-07-25 00:08:32.289299 | orchestrator |
2025-07-25 00:08:32.289307 | orchestrator | TASK [Get current user] ********************************************************
2025-07-25 00:08:33.121734 | orchestrator | ok: [testbed-manager]
2025-07-25 00:08:33.121780 | orchestrator |
2025-07-25 00:08:33.121790 | orchestrator | TASK [Create venv directory] ***************************************************
2025-07-25 00:08:33.865227 | orchestrator | changed: [testbed-manager]
2025-07-25 00:08:33.865331 | orchestrator |
2025-07-25 00:08:33.865350 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-07-25 00:08:41.014490 | orchestrator | changed: [testbed-manager]
2025-07-25 00:08:41.014593 | orchestrator |
2025-07-25 00:08:41.014634 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-07-25 00:08:47.286152 | orchestrator | changed: [testbed-manager]
2025-07-25 00:08:47.286253 | orchestrator |
2025-07-25 00:08:47.286272 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-07-25 00:08:50.002609 | orchestrator | changed: [testbed-manager]
2025-07-25 00:08:50.002737 | orchestrator |
2025-07-25 00:08:50.002755 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-07-25 00:08:51.839457 | orchestrator | changed: [testbed-manager]
2025-07-25 00:08:51.839548 | orchestrator |
2025-07-25 00:08:51.839564 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-07-25 00:08:53.015189 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-25 00:08:53.015288 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-25 00:08:53.015305 | orchestrator |
2025-07-25 00:08:53.015319 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-07-25 00:08:53.067001 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-25 00:08:53.067106 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-25 00:08:53.067132 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-25 00:08:53.067152 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-07-25 00:09:06.457470 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-25 00:09:06.457541 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-25 00:09:06.457551 | orchestrator |
2025-07-25 00:09:06.457558 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-07-25 00:09:07.040130 | orchestrator | changed: [testbed-manager]
2025-07-25 00:09:07.040167 | orchestrator |
2025-07-25 00:09:07.040174 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-07-25 00:09:28.084830 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-07-25 00:09:28.084931 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-07-25 00:09:28.084951 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-07-25 00:09:28.084965 | orchestrator |
2025-07-25 00:09:28.084980 | orchestrator | TASK [Install local collections] ***********************************************
2025-07-25 00:09:30.424991 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-07-25 00:09:30.425029 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-07-25 00:09:30.425034 | orchestrator |
2025-07-25 00:09:30.425040 | orchestrator | PLAY [Create operator user] ****************************************************
2025-07-25 00:09:30.425044 | orchestrator |
2025-07-25 00:09:30.425049 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-25 00:09:31.868893 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:31.868940 | orchestrator |
2025-07-25 00:09:31.868947 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-25 00:09:31.919737 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:31.919781 | orchestrator |
2025-07-25 00:09:31.919789 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-25 00:09:32.048893 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:32.048934 | orchestrator |
2025-07-25 00:09:32.048940 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-25 00:09:32.789622 | orchestrator | changed: [testbed-manager]
2025-07-25 00:09:32.789687 | orchestrator |
2025-07-25 00:09:32.789702 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-25 00:09:33.518635 | orchestrator | changed: [testbed-manager]
2025-07-25 00:09:33.518736 | orchestrator |
2025-07-25 00:09:33.518753 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-25 00:09:34.926820 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-07-25 00:09:34.926867 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-07-25 00:09:34.926875 | orchestrator |
2025-07-25 00:09:34.926890 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-25 00:09:36.329342 | orchestrator | changed: [testbed-manager]
2025-07-25 00:09:36.329449 | orchestrator |
2025-07-25 00:09:36.329462 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-25 00:09:38.134391 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-07-25 00:09:38.134436 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-07-25 00:09:38.134444 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-07-25 00:09:38.134451 | orchestrator |
2025-07-25 00:09:38.134459 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-25 00:09:38.201336 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:09:38.201398 | orchestrator |
2025-07-25 00:09:38.201406 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-25 00:09:38.788896 | orchestrator | changed: [testbed-manager]
2025-07-25 00:09:38.788943 | orchestrator |
2025-07-25 00:09:38.788954 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-25 00:09:38.862791 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:09:38.862850 | orchestrator |
2025-07-25 00:09:38.862857 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-25 00:09:39.791445 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-25 00:09:39.791542 | orchestrator | changed: [testbed-manager]
2025-07-25 00:09:39.791558 | orchestrator |
2025-07-25 00:09:39.791571 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-25 00:09:39.832136 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:09:39.832218 | orchestrator |
2025-07-25 00:09:39.832232 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-25 00:09:39.865561 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:09:39.865681 | orchestrator |
2025-07-25 00:09:39.865697 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-25 00:09:39.897377 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:09:39.897469 | orchestrator |
2025-07-25 00:09:39.897485 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-25 00:09:39.958798 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:09:39.958869 | orchestrator |
2025-07-25 00:09:39.958881 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-25 00:09:40.717511 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:40.717665 | orchestrator |
2025-07-25 00:09:40.717683 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-25 00:09:40.717696 | orchestrator |
2025-07-25 00:09:40.717708 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-25 00:09:42.128249 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:42.128311 | orchestrator |
2025-07-25 00:09:42.128317 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-07-25 00:09:43.064043 | orchestrator | changed: [testbed-manager]
2025-07-25 00:09:43.064167 | orchestrator |
2025-07-25 00:09:43.064184 | orchestrator | PLAY RECAP *********************************************************************
2025-07-25 00:09:43.064197 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-07-25 00:09:43.064209 | orchestrator |
2025-07-25 00:09:43.351018 | orchestrator | ok: Runtime: 0:05:51.779799
2025-07-25 00:09:43.368886 |
2025-07-25 00:09:43.369071 | TASK [Point out that the log in on the manager is now possible]
2025-07-25 00:09:43.402501 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-07-25 00:09:43.409651 |
2025-07-25 00:09:43.409753 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-25 00:09:43.446352 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-25 00:09:43.455455 |
2025-07-25 00:09:43.455575 | TASK [Run manager part 1 + 2]
2025-07-25 00:09:44.332852 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-25 00:09:44.387171 | orchestrator |
2025-07-25 00:09:44.387251 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-07-25 00:09:44.387269 | orchestrator |
2025-07-25 00:09:44.387296 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-25 00:09:46.964953 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:46.965219 | orchestrator |
2025-07-25 00:09:46.965279 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-25 00:09:47.009574 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:09:47.009647 | orchestrator |
2025-07-25 00:09:47.009663 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-25 00:09:47.051405 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:47.051482 | orchestrator |
2025-07-25 00:09:47.051499 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-25 00:09:47.090004 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:47.090161 | orchestrator |
2025-07-25 00:09:47.090179 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-25 00:09:47.154534 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:47.154718 | orchestrator |
2025-07-25 00:09:47.154740 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-25 00:09:47.218600 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:47.218666 | orchestrator |
2025-07-25 00:09:47.218677 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-25 00:09:47.255441 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-07-25 00:09:47.255532 | orchestrator |
2025-07-25 00:09:47.255549 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-25 00:09:47.998351 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:47.998494 | orchestrator |
2025-07-25 00:09:47.998519 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-25 00:09:48.042186 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:09:48.042274 | orchestrator |
2025-07-25 00:09:48.042292 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-25 00:09:49.441593 | orchestrator | changed: [testbed-manager]
2025-07-25 00:09:49.441690 | orchestrator |
2025-07-25 00:09:49.441712 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-25 00:09:50.031604 | orchestrator | ok: [testbed-manager]
2025-07-25 00:09:50.031699 | orchestrator |
2025-07-25 00:09:50.031715 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-25 00:09:51.247527 | orchestrator | changed: [testbed-manager]
2025-07-25 00:09:51.247632 | orchestrator |
2025-07-25 00:09:51.247652 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-25 00:10:07.049108 | orchestrator | changed: [testbed-manager]
2025-07-25 00:10:07.049146 | orchestrator |
2025-07-25 00:10:07.049154 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-25 00:10:07.716895 | orchestrator | ok: [testbed-manager]
2025-07-25 00:10:07.716968 | orchestrator |
2025-07-25 00:10:07.716985 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-25 00:10:07.772113 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:10:07.772177 | orchestrator |
2025-07-25 00:10:07.772191 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-07-25 00:10:08.776248 | orchestrator | changed: [testbed-manager]
2025-07-25 00:10:08.776346 | orchestrator |
2025-07-25 00:10:08.776375 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-07-25 00:10:09.743848 | orchestrator | changed: [testbed-manager]
2025-07-25 00:10:09.743890 | orchestrator |
2025-07-25 00:10:09.743902 | orchestrator | TASK [Create configuration directory] ******************************************
2025-07-25 00:10:10.309393 | orchestrator | changed: [testbed-manager]
2025-07-25 00:10:10.309487 | orchestrator |
2025-07-25 00:10:10.309504 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-07-25 00:10:10.351725 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-25 00:10:10.351801 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-25 00:10:10.351807 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-25 00:10:10.351813 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-07-25 00:10:14.941625 | orchestrator | changed: [testbed-manager]
2025-07-25 00:10:14.941709 | orchestrator |
2025-07-25 00:10:14.941724 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-07-25 00:10:24.513196 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-07-25 00:10:24.513299 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-07-25 00:10:24.513317 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-07-25 00:10:24.513329 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-07-25 00:10:24.513351 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-07-25 00:10:24.513362 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-07-25 00:10:24.513374 | orchestrator |
2025-07-25 00:10:24.513386 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-07-25 00:10:25.616439 | orchestrator | changed: [testbed-manager]
2025-07-25 00:10:25.616475 | orchestrator |
2025-07-25 00:10:25.616482 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-07-25 00:10:25.661078 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:10:25.661120 | orchestrator |
2025-07-25 00:10:25.661129 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-07-25 00:10:29.130773 | orchestrator | changed: [testbed-manager]
2025-07-25 00:10:29.130865 | orchestrator |
2025-07-25 00:10:29.130882 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-07-25 00:10:29.173155 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:10:29.173250 | orchestrator |
2025-07-25 00:10:29.173266 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-07-25 00:12:11.398998 | orchestrator | changed: [testbed-manager]
2025-07-25 00:12:11.399102 | orchestrator |
2025-07-25 00:12:11.399120 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-25 00:12:12.579397 | orchestrator | ok: [testbed-manager]
2025-07-25 00:12:12.580141 | orchestrator |
2025-07-25 00:12:12.580175 | orchestrator | PLAY RECAP *********************************************************************
2025-07-25 00:12:12.580198 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-07-25 00:12:12.580213 | orchestrator |
2025-07-25 00:12:13.084009 | orchestrator | ok: Runtime: 0:02:28.888082
2025-07-25 00:12:13.100749 |
2025-07-25 00:12:13.100890 | TASK [Reboot manager]
2025-07-25 00:12:14.637712 | orchestrator | ok: Runtime: 0:00:01.059693
2025-07-25 00:12:14.653786 |
2025-07-25 00:12:14.653967 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-25 00:12:31.970931 | orchestrator | ok
2025-07-25 00:12:31.981384 |
2025-07-25 00:12:31.981514 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-25 00:13:32.025787 | orchestrator | ok
2025-07-25 00:13:32.037344 |
2025-07-25 00:13:32.037523 | TASK [Deploy manager + bootstrap nodes]
2025-07-25 00:13:34.707551 | orchestrator |
2025-07-25 00:13:34.707735 | orchestrator | # DEPLOY MANAGER
2025-07-25 00:13:34.707758 | orchestrator |
2025-07-25 00:13:34.707772 | orchestrator | + set -e
2025-07-25 00:13:34.707784 | orchestrator | + echo
2025-07-25 00:13:34.707797 | orchestrator | + echo '# DEPLOY MANAGER'
2025-07-25 00:13:34.707813 | orchestrator | + echo
2025-07-25 00:13:34.707859 | orchestrator | + cat /opt/manager-vars.sh
2025-07-25 00:13:34.711454 | orchestrator | export NUMBER_OF_NODES=6
2025-07-25 00:13:34.711527 | orchestrator |
2025-07-25 00:13:34.711538 | orchestrator | export CEPH_VERSION=reef
2025-07-25 00:13:34.711549 | orchestrator | export CONFIGURATION_VERSION=main
2025-07-25 00:13:34.711561 | orchestrator | export MANAGER_VERSION=latest
2025-07-25 00:13:34.711585 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-07-25 00:13:34.711596 | orchestrator |
2025-07-25 00:13:34.711613 | orchestrator | export ARA=false
2025-07-25 00:13:34.711625 | orchestrator | export DEPLOY_MODE=manager
2025-07-25 00:13:34.711643 | orchestrator | export TEMPEST=true
2025-07-25 00:13:34.711655 | orchestrator | export IS_ZUUL=true
2025-07-25 00:13:34.711665 | orchestrator |
2025-07-25 00:13:34.711681 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-07-25 00:13:34.711689 | orchestrator | export EXTERNAL_API=false
2025-07-25 00:13:34.711696 | orchestrator |
2025-07-25 00:13:34.711702 | orchestrator | export IMAGE_USER=ubuntu
2025-07-25 00:13:34.711711 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-07-25 00:13:34.711717 | orchestrator |
2025-07-25 00:13:34.711724 | orchestrator | export CEPH_STACK=ceph-ansible
2025-07-25 00:13:34.711738 | orchestrator |
2025-07-25 00:13:34.711744 | orchestrator | + echo
2025-07-25 00:13:34.711754 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-25 00:13:34.712706 | orchestrator | ++ export INTERACTIVE=false
2025-07-25 00:13:34.712725 | orchestrator | ++ INTERACTIVE=false
2025-07-25 00:13:34.712733 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-25 00:13:34.712740 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-25 00:13:34.712929 | orchestrator | + source /opt/manager-vars.sh
2025-07-25 00:13:34.712946 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-25 00:13:34.712958 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-25 00:13:34.712968 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-25 00:13:34.712978 | orchestrator | ++ CEPH_VERSION=reef
2025-07-25 00:13:34.712987 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-25 00:13:34.712995 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-25 00:13:34.713005 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-25 00:13:34.713016 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-25 00:13:34.713026 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-25 00:13:34.713047 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-25 00:13:34.713059 | orchestrator | ++ export ARA=false
2025-07-25 00:13:34.713071 | orchestrator | ++ ARA=false
2025-07-25 00:13:34.713081 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-25 00:13:34.713092 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-25 00:13:34.713102 | orchestrator | ++ export TEMPEST=true
2025-07-25 00:13:34.713113 | orchestrator | ++ TEMPEST=true
2025-07-25 00:13:34.713124 | orchestrator | ++ export IS_ZUUL=true
2025-07-25 00:13:34.713134 | orchestrator | ++ IS_ZUUL=true
2025-07-25 00:13:34.713144 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-07-25 00:13:34.713150 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-07-25 00:13:34.713157 | orchestrator | ++ export EXTERNAL_API=false
2025-07-25 00:13:34.713163 | orchestrator | ++ EXTERNAL_API=false
2025-07-25 00:13:34.713169 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-25 00:13:34.713175 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-25 00:13:34.713185 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-25 00:13:34.713191 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-25 00:13:34.713198 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-25 00:13:34.713204 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-25 00:13:34.713211 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-25 00:13:34.775233 | orchestrator | + docker version
2025-07-25 00:13:35.072228 | orchestrator | Client: Docker Engine - Community
2025-07-25 00:13:35.072327 | orchestrator | Version: 27.5.1
2025-07-25 00:13:35.072341 | orchestrator | API version: 1.47
2025-07-25 00:13:35.072354 | orchestrator | Go version: go1.22.11
2025-07-25 00:13:35.072363 | orchestrator | Git commit: 9f9e405
2025-07-25 00:13:35.072372 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-25 00:13:35.072382 | orchestrator | OS/Arch: linux/amd64
2025-07-25 00:13:35.072391 | orchestrator | Context: default
2025-07-25 00:13:35.072441 | orchestrator |
2025-07-25 00:13:35.072452 | orchestrator | Server: Docker Engine - Community
2025-07-25 00:13:35.072461 | orchestrator | Engine:
2025-07-25 00:13:35.072470 | orchestrator | Version: 27.5.1
2025-07-25 00:13:35.072480 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-25 00:13:35.072515 | orchestrator | Go version: go1.22.11
2025-07-25 00:13:35.072525 | orchestrator | Git commit: 4c9b3b0
2025-07-25 00:13:35.072534 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-25 00:13:35.072542 | orchestrator | OS/Arch: linux/amd64
2025-07-25 00:13:35.072551 | orchestrator | Experimental: false
2025-07-25 00:13:35.072560 | orchestrator | containerd:
2025-07-25 00:13:35.072568 | orchestrator | Version: 1.7.27
2025-07-25 00:13:35.072578 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-25 00:13:35.072587 | orchestrator | runc:
2025-07-25 00:13:35.072595 | orchestrator | Version: 1.2.5
2025-07-25 00:13:35.072604 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-25 00:13:35.072613 | orchestrator | docker-init:
2025-07-25 00:13:35.072623 | orchestrator | Version: 0.19.0
2025-07-25 00:13:35.072634 | orchestrator | GitCommit: de40ad0
2025-07-25 00:13:35.076078 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-25 00:13:35.086218 | orchestrator | + set -e
2025-07-25 00:13:35.086328 | orchestrator | + source /opt/manager-vars.sh
2025-07-25 00:13:35.086381 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-25 00:13:35.086438 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-25 00:13:35.086460 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-25 00:13:35.086479 | orchestrator | ++ CEPH_VERSION=reef
2025-07-25 00:13:35.086493 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-25 00:13:35.086504 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-25 00:13:35.086515 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-25 00:13:35.086526 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-25 00:13:35.086537 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-25 00:13:35.086548 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-25 00:13:35.086558 | orchestrator | ++ export ARA=false
2025-07-25 00:13:35.086569 | orchestrator | ++ ARA=false
2025-07-25 00:13:35.086580 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-25 00:13:35.086592 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-25 00:13:35.086602 | orchestrator | ++ export TEMPEST=true
2025-07-25 00:13:35.086613 | orchestrator | ++ TEMPEST=true
2025-07-25 00:13:35.086624 | orchestrator | ++ export IS_ZUUL=true
2025-07-25 00:13:35.086635 | orchestrator | ++ IS_ZUUL=true
2025-07-25 00:13:35.086646 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-07-25 00:13:35.086657 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-07-25 00:13:35.086668 | orchestrator | ++ export EXTERNAL_API=false
2025-07-25 00:13:35.086679 | orchestrator | ++ EXTERNAL_API=false
2025-07-25 00:13:35.086689 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-25 00:13:35.086700 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-25 00:13:35.086711 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-25 00:13:35.086721 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-25 00:13:35.086733 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-25 00:13:35.086743 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-25 00:13:35.086754 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-25 00:13:35.086765 | orchestrator | ++ export INTERACTIVE=false
2025-07-25 00:13:35.086776 | orchestrator | ++ INTERACTIVE=false
2025-07-25 00:13:35.086786 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-25 00:13:35.086802 | orchestrator | ++
OSISM_APPLY_RETRY=1 2025-07-25 00:13:35.086826 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-25 00:13:35.086837 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-25 00:13:35.086848 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-07-25 00:13:35.093898 | orchestrator | + set -e 2025-07-25 00:13:35.093980 | orchestrator | + VERSION=reef 2025-07-25 00:13:35.095119 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-25 00:13:35.101132 | orchestrator | + [[ -n ceph_version: reef ]] 2025-07-25 00:13:35.101168 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-07-25 00:13:35.107293 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-07-25 00:13:35.113469 | orchestrator | + set -e 2025-07-25 00:13:35.113504 | orchestrator | + VERSION=2024.2 2025-07-25 00:13:35.114810 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-25 00:13:35.119302 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-07-25 00:13:35.119340 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-07-25 00:13:35.124089 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-07-25 00:13:35.124909 | orchestrator | ++ semver latest 7.0.0 2025-07-25 00:13:35.194183 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-25 00:13:35.194285 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-25 00:13:35.194301 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-07-25 00:13:35.194314 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-07-25 00:13:35.293991 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-25 00:13:35.301975 | orchestrator | + source /opt/venv/bin/activate 2025-07-25 00:13:35.303109 | orchestrator | ++ 
deactivate nondestructive 2025-07-25 00:13:35.303200 | orchestrator | ++ '[' -n '' ']' 2025-07-25 00:13:35.303214 | orchestrator | ++ '[' -n '' ']' 2025-07-25 00:13:35.303223 | orchestrator | ++ hash -r 2025-07-25 00:13:35.303230 | orchestrator | ++ '[' -n '' ']' 2025-07-25 00:13:35.303237 | orchestrator | ++ unset VIRTUAL_ENV 2025-07-25 00:13:35.303244 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-07-25 00:13:35.303251 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-07-25 00:13:35.303271 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-07-25 00:13:35.303281 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-07-25 00:13:35.303291 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-07-25 00:13:35.303299 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-07-25 00:13:35.303309 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-25 00:13:35.303319 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-25 00:13:35.303327 | orchestrator | ++ export PATH 2025-07-25 00:13:35.303336 | orchestrator | ++ '[' -n '' ']' 2025-07-25 00:13:35.303584 | orchestrator | ++ '[' -z '' ']' 2025-07-25 00:13:35.303604 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-07-25 00:13:35.303617 | orchestrator | ++ PS1='(venv) ' 2025-07-25 00:13:35.303633 | orchestrator | ++ export PS1 2025-07-25 00:13:35.303645 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-07-25 00:13:35.303659 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-07-25 00:13:35.303672 | orchestrator | ++ hash -r 2025-07-25 00:13:35.303708 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-07-25 00:13:36.650877 | orchestrator | 2025-07-25 00:13:36.650998 | orchestrator | PLAY [Copy custom facts] 
******************************************************* 2025-07-25 00:13:36.651016 | orchestrator | 2025-07-25 00:13:36.651028 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-25 00:13:37.248555 | orchestrator | ok: [testbed-manager] 2025-07-25 00:13:37.248674 | orchestrator | 2025-07-25 00:13:37.248700 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-07-25 00:13:38.250388 | orchestrator | changed: [testbed-manager] 2025-07-25 00:13:38.250552 | orchestrator | 2025-07-25 00:13:38.250573 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-07-25 00:13:38.250586 | orchestrator | 2025-07-25 00:13:38.250598 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-25 00:13:40.797694 | orchestrator | ok: [testbed-manager] 2025-07-25 00:13:40.797842 | orchestrator | 2025-07-25 00:13:40.797872 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-07-25 00:13:40.862175 | orchestrator | ok: [testbed-manager] 2025-07-25 00:13:40.862277 | orchestrator | 2025-07-25 00:13:40.862297 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-07-25 00:13:41.327721 | orchestrator | changed: [testbed-manager] 2025-07-25 00:13:41.327838 | orchestrator | 2025-07-25 00:13:41.327855 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-07-25 00:13:41.363990 | orchestrator | skipping: [testbed-manager] 2025-07-25 00:13:41.364082 | orchestrator | 2025-07-25 00:13:41.364102 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-25 00:13:41.724471 | orchestrator | changed: [testbed-manager] 2025-07-25 00:13:41.724604 | orchestrator | 2025-07-25 00:13:41.724623 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-07-25 00:13:41.776996 | orchestrator | skipping: [testbed-manager] 2025-07-25 00:13:41.777100 | orchestrator | 2025-07-25 00:13:41.777116 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-07-25 00:13:42.122937 | orchestrator | ok: [testbed-manager] 2025-07-25 00:13:42.123068 | orchestrator | 2025-07-25 00:13:42.123095 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-07-25 00:13:42.249048 | orchestrator | skipping: [testbed-manager] 2025-07-25 00:13:42.249150 | orchestrator | 2025-07-25 00:13:42.249165 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-07-25 00:13:42.249179 | orchestrator | 2025-07-25 00:13:42.249192 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-25 00:13:44.055820 | orchestrator | ok: [testbed-manager] 2025-07-25 00:13:44.055950 | orchestrator | 2025-07-25 00:13:44.055977 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-07-25 00:13:44.163852 | orchestrator | included: osism.services.traefik for testbed-manager 2025-07-25 00:13:44.163963 | orchestrator | 2025-07-25 00:13:44.163980 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-07-25 00:13:44.221233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-07-25 00:13:44.221340 | orchestrator | 2025-07-25 00:13:44.221358 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-07-25 00:13:45.385287 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-07-25 00:13:45.385469 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-07-25 00:13:45.385489 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-25 00:13:45.385503 | orchestrator |
2025-07-25 00:13:45.385516 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-25 00:13:47.241068 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-25 00:13:47.241174 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-07-25 00:13:47.241203 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-25 00:13:47.241774 | orchestrator |
2025-07-25 00:13:47.241790 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-25 00:13:47.897705 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-25 00:13:47.897837 | orchestrator | changed: [testbed-manager]
2025-07-25 00:13:47.897855 | orchestrator |
2025-07-25 00:13:47.897868 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-25 00:13:48.565578 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-25 00:13:48.565701 | orchestrator | changed: [testbed-manager]
2025-07-25 00:13:48.565719 | orchestrator |
2025-07-25 00:13:48.565733 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-25 00:13:48.626449 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:13:48.626537 | orchestrator |
2025-07-25 00:13:48.626552 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-25 00:13:48.992718 | orchestrator | ok: [testbed-manager]
2025-07-25 00:13:48.992823 | orchestrator |
2025-07-25 00:13:48.992841 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-25 00:13:49.060989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-25 00:13:49.061081 | orchestrator |
2025-07-25 00:13:49.061094 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-25 00:13:50.156839 | orchestrator | changed: [testbed-manager]
2025-07-25 00:13:50.156951 | orchestrator |
2025-07-25 00:13:50.156969 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-25 00:13:50.998494 | orchestrator | changed: [testbed-manager]
2025-07-25 00:13:50.998579 | orchestrator |
2025-07-25 00:13:50.998588 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-25 00:14:03.566082 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:03.566203 | orchestrator |
2025-07-25 00:14:03.566221 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-25 00:14:03.620963 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:14:03.621049 | orchestrator |
2025-07-25 00:14:03.621064 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-25 00:14:03.621077 | orchestrator |
2025-07-25 00:14:03.621088 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-25 00:14:05.474642 | orchestrator | ok: [testbed-manager]
2025-07-25 00:14:05.474751 | orchestrator |
2025-07-25 00:14:05.474805 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-25 00:14:05.593800 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-25 00:14:05.593910 | orchestrator |
2025-07-25 00:14:05.593925 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-25 00:14:05.651065 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-25 00:14:05.651173 | orchestrator |
2025-07-25 00:14:05.651189 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-25 00:14:08.954959 | orchestrator | ok: [testbed-manager]
2025-07-25 00:14:08.955043 | orchestrator |
2025-07-25 00:14:08.955053 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-25 00:14:09.010989 | orchestrator | ok: [testbed-manager]
2025-07-25 00:14:09.011076 | orchestrator |
2025-07-25 00:14:09.011092 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-25 00:14:09.134528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-25 00:14:09.134630 | orchestrator |
2025-07-25 00:14:09.134642 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-25 00:14:12.004985 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-25 00:14:12.005094 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-25 00:14:12.005111 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-25 00:14:12.005124 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-25 00:14:12.005135 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-25 00:14:12.005146 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-25 00:14:12.005157 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-25 00:14:12.005168 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-25 00:14:12.005179 | orchestrator |
2025-07-25 00:14:12.005192 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-07-25 00:14:12.685972 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:12.686137 | orchestrator |
2025-07-25 00:14:12.686166 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-25 00:14:13.366858 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:13.366961 | orchestrator |
2025-07-25 00:14:13.366977 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-25 00:14:13.463568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-25 00:14:13.463664 | orchestrator |
2025-07-25 00:14:13.463677 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-25 00:14:14.683873 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-25 00:14:14.684029 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-25 00:14:14.684059 | orchestrator |
2025-07-25 00:14:14.684780 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-25 00:14:15.345599 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:15.345705 | orchestrator |
2025-07-25 00:14:15.345723 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-25 00:14:15.410563 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:14:15.410652 | orchestrator |
2025-07-25 00:14:15.410663 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-25 00:14:15.473185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-25 00:14:15.473269 | orchestrator |
2025-07-25 00:14:15.473279 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-25 00:14:16.922638 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-25 00:14:16.922750 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-25 00:14:16.922765 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:16.922779 | orchestrator |
2025-07-25 00:14:16.922791 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-25 00:14:17.620188 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:17.620296 | orchestrator |
2025-07-25 00:14:17.620314 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-25 00:14:17.664588 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:14:17.664682 | orchestrator |
2025-07-25 00:14:17.664696 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-25 00:14:17.750606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-25 00:14:17.750709 | orchestrator |
2025-07-25 00:14:17.750725 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-25 00:14:18.287957 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:18.288077 | orchestrator |
2025-07-25 00:14:18.288094 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-25 00:14:18.711241 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:18.711350 | orchestrator |
2025-07-25 00:14:18.711423 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-25 00:14:19.959429 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-25 00:14:19.959511 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-25 00:14:19.959518 | orchestrator |
2025-07-25 00:14:19.959523 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-25 00:14:20.629598 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:20.629700 | orchestrator |
2025-07-25 00:14:20.629718 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-25 00:14:21.025928 | orchestrator | ok: [testbed-manager]
2025-07-25 00:14:21.026098 | orchestrator |
2025-07-25 00:14:21.026118 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-25 00:14:21.402864 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:21.402940 | orchestrator |
2025-07-25 00:14:21.402947 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-25 00:14:21.447426 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:14:21.447490 | orchestrator |
2025-07-25 00:14:21.447500 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-25 00:14:21.526790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-25 00:14:21.526879 | orchestrator |
2025-07-25 00:14:21.526893 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-25 00:14:21.564077 | orchestrator | ok: [testbed-manager]
2025-07-25 00:14:21.564161 | orchestrator |
2025-07-25 00:14:21.564175 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-25 00:14:23.587717 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-25 00:14:23.587826 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-25 00:14:23.587842 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-25 00:14:23.587853 | orchestrator |
2025-07-25 00:14:23.587866 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-25 00:14:24.328454 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:24.328556 | orchestrator |
2025-07-25 00:14:24.328569 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-25 00:14:25.057569 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:25.057675 | orchestrator |
2025-07-25 00:14:25.057692 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-25 00:14:25.779500 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:25.779614 | orchestrator |
2025-07-25 00:14:25.779631 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-25 00:14:25.850062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-25 00:14:25.850161 | orchestrator |
2025-07-25 00:14:25.850175 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-25 00:14:25.893247 | orchestrator | ok: [testbed-manager]
2025-07-25 00:14:25.893337 | orchestrator |
2025-07-25 00:14:25.893351 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-25 00:14:26.616286 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-25 00:14:26.616471 | orchestrator |
2025-07-25 00:14:26.616495 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-25 00:14:26.694928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-25 00:14:26.695007 | orchestrator |
2025-07-25 00:14:26.695014 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-07-25 00:14:27.425714 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:27.425825 | orchestrator |
2025-07-25 00:14:27.425842 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-25 00:14:28.030891 | orchestrator | ok: [testbed-manager]
2025-07-25 00:14:28.031013 | orchestrator |
2025-07-25 00:14:28.031041 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-25 00:14:28.082512 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:14:28.082610 | orchestrator |
2025-07-25 00:14:28.082624 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-25 00:14:28.131901 | orchestrator | ok: [testbed-manager]
2025-07-25 00:14:28.131987 | orchestrator |
2025-07-25 00:14:28.132002 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-25 00:14:28.959317 | orchestrator | changed: [testbed-manager]
2025-07-25 00:14:28.959466 | orchestrator |
2025-07-25 00:14:28.959483 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-25 00:15:38.678008 | orchestrator | changed: [testbed-manager]
2025-07-25 00:15:38.678173 | orchestrator |
2025-07-25 00:15:38.678193 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-25 00:15:39.722483 | orchestrator | ok: [testbed-manager]
2025-07-25 00:15:39.722591 | orchestrator |
2025-07-25 00:15:39.722607 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-25 00:15:39.779867 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:15:39.779987 | orchestrator |
2025-07-25 00:15:39.780017 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-07-25 00:16:08.708102 | orchestrator | changed: [testbed-manager]
2025-07-25 00:16:08.708227 | orchestrator |
2025-07-25 00:16:08.708249 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-25 00:16:08.755210 | orchestrator | ok: [testbed-manager]
2025-07-25 00:16:08.755292 | orchestrator |
2025-07-25 00:16:08.755371 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-25 00:16:08.755385 | orchestrator |
2025-07-25 00:16:08.755397 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-07-25 00:16:08.810774 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:16:08.810857 | orchestrator |
2025-07-25 00:16:08.810871 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-07-25 00:17:08.855588 | orchestrator | Pausing for 60 seconds
2025-07-25 00:17:08.855717 | orchestrator | changed: [testbed-manager]
2025-07-25 00:17:08.855734 | orchestrator |
2025-07-25 00:17:08.855748 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-07-25 00:17:12.744084 | orchestrator | changed: [testbed-manager]
2025-07-25 00:17:12.744193 | orchestrator |
2025-07-25 00:17:12.744210 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-07-25 00:17:54.659503 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-07-25 00:17:54.659628 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-07-25 00:17:54.659644 | orchestrator | changed: [testbed-manager]
2025-07-25 00:17:54.659658 | orchestrator |
2025-07-25 00:17:54.659671 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-25 00:18:04.526106 | orchestrator | changed: [testbed-manager]
2025-07-25 00:18:04.526225 | orchestrator |
2025-07-25 00:18:04.526284 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-25 00:18:04.603833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-25 00:18:04.603970 | orchestrator |
2025-07-25 00:18:04.603986 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-25 00:18:04.603998 | orchestrator |
2025-07-25 00:18:04.604010 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-25 00:18:04.642369 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:18:04.642474 | orchestrator |
2025-07-25 00:18:04.642493 | orchestrator | PLAY RECAP *********************************************************************
2025-07-25 00:18:04.642507 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-25 00:18:04.642519 | orchestrator |
2025-07-25 00:18:04.747131 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-25 00:18:04.747278 | orchestrator | + deactivate
2025-07-25 00:18:04.747297 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-25 00:18:04.747311 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-25 00:18:04.747322 | orchestrator | + export PATH
2025-07-25 00:18:04.747333 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-25 00:18:04.747346 | orchestrator | + '[' -n '' ']'
2025-07-25 00:18:04.747357 | orchestrator | + hash -r
2025-07-25 00:18:04.747368 | orchestrator | + '[' -n '' ']'
2025-07-25 00:18:04.747379 | orchestrator | + unset VIRTUAL_ENV
2025-07-25 00:18:04.747389 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-25 00:18:04.747423 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-25 00:18:04.747434 | orchestrator | + unset -f deactivate
2025-07-25 00:18:04.747446 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-25 00:18:04.752754 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-25 00:18:04.752807 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-25 00:18:04.752819 | orchestrator | + local max_attempts=60
2025-07-25 00:18:04.752831 | orchestrator | + local name=ceph-ansible
2025-07-25 00:18:04.752843 | orchestrator | + local attempt_num=1
2025-07-25 00:18:04.753728 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-25 00:18:04.796000 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-25 00:18:04.796111 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-25 00:18:04.796134 | orchestrator | + local max_attempts=60
2025-07-25 00:18:04.796156 | orchestrator | + local name=kolla-ansible
2025-07-25 00:18:04.796174 | orchestrator | + local attempt_num=1
2025-07-25 00:18:04.796300 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-25 00:18:04.839904 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-25 00:18:04.840009 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-25 00:18:04.840024 | orchestrator | + local max_attempts=60
2025-07-25 00:18:04.840037 | orchestrator | + local name=osism-ansible
2025-07-25 00:18:04.840048 | orchestrator | + local attempt_num=1
2025-07-25 00:18:04.841352 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-25 00:18:04.881209 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-25 00:18:04.881342 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-25 00:18:04.881356 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-25 00:18:05.601285 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-25 00:18:05.860968 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-25 00:18:05.861072 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-25 00:18:05.861087 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-25 00:18:05.861099 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-25 00:18:05.861112 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-25 00:18:05.861156 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-25 00:18:05.861168 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-25 00:18:05.861179 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-07-25 00:18:05.861190 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-25
00:18:05.861201 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-25 00:18:05.861212 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-25 00:18:05.861223 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-25 00:18:05.861296 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-25 00:18:05.861307 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-25 00:18:05.861318 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-25 00:18:05.867026 | orchestrator | ++ semver latest 7.0.0 2025-07-25 00:18:05.915073 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-25 00:18:05.915151 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-25 00:18:05.915166 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-25 00:18:05.919697 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-25 00:18:17.812390 | orchestrator | 2025-07-25 00:18:17 | INFO  | Task f981e945-bffb-46d2-a487-04a3e31b0255 (resolvconf) was prepared for execution. 2025-07-25 00:18:17.812534 | orchestrator | 2025-07-25 00:18:17 | INFO  | It takes a moment until task f981e945-bffb-46d2-a487-04a3e31b0255 (resolvconf) has been started and output is visible here. 
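The xtrace above shows calls to a `wait_for_container_healthy` helper that polls `docker inspect` until a container reports `healthy`. The following is a hedged shell reconstruction based only on the variable names visible in the trace; the actual script in the testbed repository may differ, and the 5-second retry interval is an assumption:

```shell
# Hedged sketch of a wait_for_container_healthy helper as suggested by the
# xtrace output; retry interval and error message are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Poll the Docker health status until the container reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the trace above all three containers (ceph-ansible, kolla-ansible, osism-ansible) were already healthy on the first probe, so the loop body never ran.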
2025-07-25 00:18:38.452595 | orchestrator |
2025-07-25 00:18:38.452719 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-07-25 00:18:38.452740 | orchestrator |
2025-07-25 00:18:38.452758 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-25 00:18:38.452774 | orchestrator | Friday 25 July 2025 00:18:23 +0000 (0:00:00.148) 0:00:00.148 ***********
2025-07-25 00:18:38.452790 | orchestrator | ok: [testbed-manager]
2025-07-25 00:18:38.452806 | orchestrator |
2025-07-25 00:18:38.452820 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-25 00:18:38.452830 | orchestrator | Friday 25 July 2025 00:18:29 +0000 (0:00:05.282) 0:00:05.431 ***********
2025-07-25 00:18:38.452840 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:18:38.452849 | orchestrator |
2025-07-25 00:18:38.452862 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-25 00:18:38.452871 | orchestrator | Friday 25 July 2025 00:18:29 +0000 (0:00:00.068) 0:00:05.499 ***********
2025-07-25 00:18:38.452902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-07-25 00:18:38.452912 | orchestrator |
2025-07-25 00:18:38.452921 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-25 00:18:38.452930 | orchestrator | Friday 25 July 2025 00:18:29 +0000 (0:00:00.085) 0:00:05.584 ***********
2025-07-25 00:18:38.452939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-07-25 00:18:38.452947 | orchestrator |
2025-07-25 00:18:38.452956 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-25 00:18:38.452965 | orchestrator | Friday 25 July 2025 00:18:29 +0000 (0:00:00.071) 0:00:05.656 ***********
2025-07-25 00:18:38.452973 | orchestrator | ok: [testbed-manager]
2025-07-25 00:18:38.452982 | orchestrator |
2025-07-25 00:18:38.452990 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-25 00:18:38.452999 | orchestrator | Friday 25 July 2025 00:18:31 +0000 (0:00:01.627) 0:00:07.284 ***********
2025-07-25 00:18:38.453007 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:18:38.453016 | orchestrator |
2025-07-25 00:18:38.453024 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-25 00:18:38.453033 | orchestrator | Friday 25 July 2025 00:18:31 +0000 (0:00:00.060) 0:00:07.344 ***********
2025-07-25 00:18:38.453041 | orchestrator | ok: [testbed-manager]
2025-07-25 00:18:38.453049 | orchestrator |
2025-07-25 00:18:38.453058 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-25 00:18:38.453066 | orchestrator | Friday 25 July 2025 00:18:31 +0000 (0:00:00.739) 0:00:08.084 ***********
2025-07-25 00:18:38.453075 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:18:38.453083 | orchestrator |
2025-07-25 00:18:38.453092 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-25 00:18:38.453101 | orchestrator | Friday 25 July 2025 00:18:31 +0000 (0:00:00.082) 0:00:08.167 ***********
2025-07-25 00:18:38.453110 | orchestrator | changed: [testbed-manager]
2025-07-25 00:18:38.453118 | orchestrator |
2025-07-25 00:18:38.453127 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-25 00:18:38.453135 | orchestrator | Friday 25 July 2025 00:18:32 +0000 (0:00:00.999) 0:00:09.166 ***********
2025-07-25 00:18:38.453146 | orchestrator | changed: [testbed-manager]
2025-07-25 00:18:38.453156 | orchestrator |
2025-07-25 00:18:38.453165 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-25 00:18:38.453175 | orchestrator | Friday 25 July 2025 00:18:34 +0000 (0:00:01.729) 0:00:10.896 ***********
2025-07-25 00:18:38.453185 | orchestrator | ok: [testbed-manager]
2025-07-25 00:18:38.453195 | orchestrator |
2025-07-25 00:18:38.453205 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-25 00:18:38.453262 | orchestrator | Friday 25 July 2025 00:18:36 +0000 (0:00:01.415) 0:00:12.312 ***********
2025-07-25 00:18:38.453273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-07-25 00:18:38.453282 | orchestrator |
2025-07-25 00:18:38.453302 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-25 00:18:38.453311 | orchestrator | Friday 25 July 2025 00:18:36 +0000 (0:00:00.079) 0:00:12.391 ***********
2025-07-25 00:18:38.453321 | orchestrator | changed: [testbed-manager]
2025-07-25 00:18:38.453331 | orchestrator |
2025-07-25 00:18:38.453341 | orchestrator | PLAY RECAP *********************************************************************
2025-07-25 00:18:38.453352 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-25 00:18:38.453363 | orchestrator |
2025-07-25 00:18:38.453372 | orchestrator |
2025-07-25 00:18:38.453383 | orchestrator | TASKS RECAP ********************************************************************
2025-07-25 00:18:38.453400 | orchestrator | Friday 25 July 2025 00:18:37 +0000 (0:00:01.632) 0:00:14.023 ***********
2025-07-25 00:18:38.453410 | orchestrator | ===============================================================================
2025-07-25 00:18:38.453420 | orchestrator | Gathering Facts --------------------------------------------------------- 5.28s
2025-07-25 00:18:38.453430 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.73s
2025-07-25 00:18:38.453441 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.63s
2025-07-25 00:18:38.453451 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.63s
2025-07-25 00:18:38.453461 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.42s
2025-07-25 00:18:38.453472 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 1.00s
2025-07-25 00:18:38.453499 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.74s
2025-07-25 00:18:38.453508 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-07-25 00:18:38.453517 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-07-25 00:18:38.453526 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-07-25 00:18:38.453535 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-07-25 00:18:38.453543 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-07-25 00:18:38.453552 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-07-25 00:18:38.789709 | orchestrator | + osism apply sshconfig
2025-07-25 00:18:50.818931 | orchestrator | 2025-07-25 00:18:50 | INFO  | Task 66a738f1-af61-4343-bdd0-396ba99fec9b (sshconfig) was prepared for execution.
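The `osism apply sshconfig` run that follows writes one configuration fragment per host under `~/.ssh/config.d` and then assembles them into a single `~/.ssh/config`. A hedged shell sketch of that fragment-and-assemble pattern (the role itself does this with Ansible tasks; the hostnames, options, and demo directory here are illustrative, not taken from the role):

```shell
# Fragment-and-assemble sketch; uses a local demo directory so it does not
# touch the real ~/.ssh. Host options are illustrative assumptions.
conf_dir="./sshconfig-demo"
mkdir -p "$conf_dir/config.d"

for host in testbed-manager testbed-node-0 testbed-node-1; do
    cat > "$conf_dir/config.d/$host" <<EOF
Host $host
    User dragon
EOF
done

# Concatenate all per-host fragments into one config file.
cat "$conf_dir"/config.d/* > "$conf_dir/config"
chmod 600 "$conf_dir/config"
```

Keeping one fragment per host makes the loop in the play below ("Ensure config for each host exist") idempotent per item, with the final "Assemble ssh config" step producing the merged file.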
2025-07-25 00:18:50.819048 | orchestrator | 2025-07-25 00:18:50 | INFO  | It takes a moment until task 66a738f1-af61-4343-bdd0-396ba99fec9b (sshconfig) has been started and output is visible here.
2025-07-25 00:19:07.837153 | orchestrator |
2025-07-25 00:19:07.837310 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-07-25 00:19:07.837329 | orchestrator |
2025-07-25 00:19:07.837341 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-07-25 00:19:07.837352 | orchestrator | Friday 25 July 2025 00:18:56 +0000 (0:00:00.147) 0:00:00.147 ***********
2025-07-25 00:19:07.837363 | orchestrator | ok: [testbed-manager]
2025-07-25 00:19:07.837375 | orchestrator |
2025-07-25 00:19:07.837385 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-07-25 00:19:07.837396 | orchestrator | Friday 25 July 2025 00:18:57 +0000 (0:00:00.751) 0:00:00.899 ***********
2025-07-25 00:19:07.837407 | orchestrator | changed: [testbed-manager]
2025-07-25 00:19:07.837418 | orchestrator |
2025-07-25 00:19:07.837429 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-07-25 00:19:07.837439 | orchestrator | Friday 25 July 2025 00:18:58 +0000 (0:00:00.931) 0:00:01.831 ***********
2025-07-25 00:19:07.837450 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-07-25 00:19:07.837461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-07-25 00:19:07.837471 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-07-25 00:19:07.837482 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-07-25 00:19:07.837492 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-07-25 00:19:07.837503 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-07-25 00:19:07.837534 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-07-25 00:19:07.837546 | orchestrator |
2025-07-25 00:19:07.837556 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-07-25 00:19:07.837567 | orchestrator | Friday 25 July 2025 00:19:06 +0000 (0:00:07.956) 0:00:09.787 ***********
2025-07-25 00:19:07.837601 | orchestrator | skipping: [testbed-manager]
2025-07-25 00:19:07.837612 | orchestrator |
2025-07-25 00:19:07.837623 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-07-25 00:19:07.837633 | orchestrator | Friday 25 July 2025 00:19:06 +0000 (0:00:00.049) 0:00:09.837 ***********
2025-07-25 00:19:07.837644 | orchestrator | changed: [testbed-manager]
2025-07-25 00:19:07.837654 | orchestrator |
2025-07-25 00:19:07.837666 | orchestrator | PLAY RECAP *********************************************************************
2025-07-25 00:19:07.837679 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-25 00:19:07.837694 | orchestrator |
2025-07-25 00:19:07.837706 | orchestrator |
2025-07-25 00:19:07.837719 | orchestrator | TASKS RECAP ********************************************************************
2025-07-25 00:19:07.837731 | orchestrator | Friday 25 July 2025 00:19:07 +0000 (0:00:00.802) 0:00:10.640 ***********
2025-07-25 00:19:07.837744 | orchestrator | ===============================================================================
2025-07-25 00:19:07.837756 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 7.96s
2025-07-25 00:19:07.837768 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.93s
2025-07-25 00:19:07.837780 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.80s
2025-07-25 00:19:07.837793 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.75s
2025-07-25 00:19:07.837805 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.05s
2025-07-25 00:19:08.105512 | orchestrator | + osism apply known-hosts
2025-07-25 00:19:20.113165 | orchestrator | 2025-07-25 00:19:20 | INFO  | Task 3725a04e-8cf9-429d-a0d9-c96ba7fff3f1 (known-hosts) was prepared for execution.
2025-07-25 00:19:20.113320 | orchestrator | 2025-07-25 00:19:20 | INFO  | It takes a moment until task 3725a04e-8cf9-429d-a0d9-c96ba7fff3f1 (known-hosts) has been started and output is visible here.
2025-07-25 00:19:34.262927 | orchestrator | 2025-07-25 00:19:34 | INFO  | Task 161f95a5-df15-4493-b0ce-f04e2680ab91 (known-hosts) was prepared for execution.
2025-07-25 00:19:34.263044 | orchestrator | 2025-07-25 00:19:34 | INFO  | It takes a moment until task 161f95a5-df15-4493-b0ce-f04e2680ab91 (known-hosts) has been started and output is visible here.
2025-07-25 00:19:47.060875 | orchestrator |
2025-07-25 00:19:47.061002 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-25 00:19:47.061021 | orchestrator |
2025-07-25 00:19:47.061034 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-25 00:19:47.061047 | orchestrator | Friday 25 July 2025 00:19:26 +0000 (0:00:00.151) 0:00:00.151 ***********
2025-07-25 00:19:47.061059 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-25 00:19:47.061070 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-25 00:19:47.061081 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-25 00:19:47.061092 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-25 00:19:47.061102 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-25 00:19:47.061113 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-25 00:19:47.061124 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-25 00:19:47.061134 | orchestrator |
2025-07-25 00:19:47.061145 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-25 00:19:47.061158 | orchestrator | Friday 25 July 2025 00:19:33 +0000 (0:00:07.234) 0:00:07.385 ***********
2025-07-25 00:19:47.061170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-25 00:19:47.061219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-25 00:19:47.061268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-25 00:19:47.061294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-25 00:19:47.061306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-25 00:19:47.061317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-25 00:19:47.061327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-25 00:19:47.061338 | orchestrator |
2025-07-25 00:19:47.061349 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-25 00:19:47.061360 | orchestrator | Friday 25 July 2025 00:19:33 +0000 (0:00:00.193) 0:00:07.579 ***********
2025-07-25 00:19:47.061371 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-25 00:19:47.061386 | orchestrator |
2025-07-25 00:19:47.061398 | orchestrator | Task failed.
2025-07-25 00:19:47.061412 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3
2025-07-25 00:19:47.061425 | orchestrator |
2025-07-25 00:19:47.061438 | orchestrator | 1 ---
2025-07-25 00:19:47.061450 | orchestrator | 2 - name: Write scanned known_hosts entries
2025-07-25 00:19:47.061463 | orchestrator |  ^ column 3
2025-07-25 00:19:47.061475 | orchestrator |
2025-07-25 00:19:47.061487 | orchestrator | <<< caused by >>>
2025-07-25 00:19:47.061499 | orchestrator |
2025-07-25 00:19:47.061512 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-25 00:19:47.061525 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-25 00:19:47.061537 | orchestrator |
2025-07-25 00:19:47.061550 | orchestrator | 10 when:
2025-07-25 00:19:47.061563 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-25 00:19:47.061576 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-25 00:19:47.061589 | orchestrator |  ^ column 7
2025-07-25 00:19:47.061601 | orchestrator |
2025-07-25 00:19:47.061614 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-25 00:19:47.061627 | orchestrator |
2025-07-25 00:19:47.061643 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7l6AvuE8wUbV1wGSXUNtZrnphtTscJpPlJq1qtp4Gr+QvzQNNQsAFtZsCtI6osTMJT2MW1wBu6yUaOBxkiXyhwn/GmaBN5lmdsjjyfjITka1LHzsiFNZBWwKXl7aPHgFPBtV8IzXGt3anZmgvsHjuVlLx6JNJM1UVOuczWluqGgEogV7qFe+MwFJYSfCBxTTB1ZOVtLBXSR/f2SEIbjH7saX6QwkfdUTL+hj6vXmZhk7d+AwnPcYz5TA8P7wuYQ/q2OtsmGXPL7sqdKGyclMLoT0otMS+XyRyJ60wQaS+aT0a/ZI70JQfjQp7uJQ0mtwEv6A6cU1irx0ol40TX7er2O0TZPvjFL+nGy7zhmoy7rC6qIsJYAKM9puFsX7KQEuNanI7KM5Q5ImO3v+P4OHcMnryjLjCIEdHJpmSlmlIZMq4ARGvd0GvtLC8kgTp4A+aDwsPJeGR4m6mPY8nUfefHGzh0qE3OMWdIk16RVEj7VIpBdt7Ror/H7ISLF4ApbM=) => changed=false
2025-07-25 00:19:47.061659 | orchestrator |  ansible_loop_var: inner_item
2025-07-25 00:19:47.061692 | orchestrator |  inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7l6AvuE8wUbV1wGSXUNtZrnphtTscJpPlJq1qtp4Gr+QvzQNNQsAFtZsCtI6osTMJT2MW1wBu6yUaOBxkiXyhwn/GmaBN5lmdsjjyfjITka1LHzsiFNZBWwKXl7aPHgFPBtV8IzXGt3anZmgvsHjuVlLx6JNJM1UVOuczWluqGgEogV7qFe+MwFJYSfCBxTTB1ZOVtLBXSR/f2SEIbjH7saX6QwkfdUTL+hj6vXmZhk7d+AwnPcYz5TA8P7wuYQ/q2OtsmGXPL7sqdKGyclMLoT0otMS+XyRyJ60wQaS+aT0a/ZI70JQfjQp7uJQ0mtwEv6A6cU1irx0ol40TX7er2O0TZPvjFL+nGy7zhmoy7rC6qIsJYAKM9puFsX7KQEuNanI7KM5Q5ImO3v+P4OHcMnryjLjCIEdHJpmSlmlIZMq4ARGvd0GvtLC8kgTp4A+aDwsPJeGR4m6mPY8nUfefHGzh0qE3OMWdIk16RVEj7VIpBdt7Ror/H7ISLF4ApbM=
2025-07-25 00:19:47.061714 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-25 00:19:47.061729 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhldSQL/nMRe97Ir7furQs/9Nl8CflnxlbFHs+H1LKgrjiDAQ3sOtZu9i5nif88K9xTJGPq+Cw/vDWIzLKc+/Y=) => changed=false
2025-07-25 00:19:47.061741 | orchestrator |  ansible_loop_var: inner_item
2025-07-25 00:19:47.061752 | orchestrator |  inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhldSQL/nMRe97Ir7furQs/9Nl8CflnxlbFHs+H1LKgrjiDAQ3sOtZu9i5nif88K9xTJGPq+Cw/vDWIzLKc+/Y=
2025-07-25 00:19:47.061763 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-25 00:19:47.061838 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAVV5djlR964rNTBt7/yZJ4DYmo8wqFW9w+eje8drir/) => changed=false
2025-07-25 00:19:47.061850 | orchestrator |  ansible_loop_var: inner_item
2025-07-25 00:19:47.061860 | orchestrator |  inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAVV5djlR964rNTBt7/yZJ4DYmo8wqFW9w+eje8drir/
2025-07-25 00:19:47.061872 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
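The root cause is the `when:` clause quoted in the traceback: under ansible-core's stricter conditional handling, `item['stdout_lines'] | length` renders to an int (here 3), which is truthy but not a bool, so the task is rejected. The conventional repair is an explicit comparison; the sketch below is an assumption about how the role would be patched, not taken from this log:

```yaml
# write-scanned.yml (sketch): the second condition must yield a bool.
- name: Write scanned known_hosts entries
  # ... module and loop as in the role ...
  when:
    - item['stdout_lines'] is defined
    - item['stdout_lines'] | length > 0   # int -> bool via explicit comparison
```

Alternatively, the log itself notes that broken conditionals can be temporarily tolerated via the `ALLOW_BROKEN_CONDITIONALS` configuration option, which would unblock the run without patching the role.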
2025-07-25 00:19:47.061882 | orchestrator |
2025-07-25 00:19:47.061893 | orchestrator | PLAY RECAP *********************************************************************
2025-07-25 00:19:47.061904 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-25 00:19:47.061915 | orchestrator |
2025-07-25 00:19:47.061925 | orchestrator |
2025-07-25 00:19:47.061936 | orchestrator | TASKS RECAP ********************************************************************
2025-07-25 00:19:47.061947 | orchestrator | Friday 25 July 2025 00:19:33 +0000 (0:00:00.096) 0:00:07.676 ***********
2025-07-25 00:19:47.061957 | orchestrator | ===============================================================================
2025-07-25 00:19:47.061967 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 7.23s
2025-07-25 00:19:47.061978 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s
2025-07-25 00:19:47.061989 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.10s
2025-07-25 00:19:47.062143 | orchestrator |
2025-07-25 00:19:47.062165 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-25 00:19:47.062233 | orchestrator |
2025-07-25 00:19:47.062251 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-25 00:19:47.062269 | orchestrator | Friday 25 July 2025 00:19:40 +0000 (0:00:00.154) 0:00:00.154 ***********
2025-07-25 00:19:47.062285 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-25 00:19:47.062302 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-25 00:19:47.062319 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-25 00:19:47.062336 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-25 00:19:47.062354 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-25 00:19:47.062371 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-25 00:19:47.062389 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-25 00:19:47.062408 | orchestrator |
2025-07-25 00:19:47.062426 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-25 00:19:47.062444 | orchestrator | Friday 25 July 2025 00:19:46 +0000 (0:00:06.494) 0:00:06.648 ***********
2025-07-25 00:19:47.062476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-25 00:19:47.062495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-25 00:19:47.062512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-25 00:19:47.062538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-25 00:19:47.696830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-25 00:19:47.696934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-25 00:19:47.696949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-25 00:19:47.696961 | orchestrator |
2025-07-25 00:19:47.696973 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-25 00:19:47.696986 | orchestrator | Friday 25 July 2025 00:19:47 +0000 (0:00:00.181) 0:00:06.830 ***********
2025-07-25 00:19:47.696997 | orchestrator | [ERROR]: Task failed: Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-25 00:19:47.697009 | orchestrator |
2025-07-25 00:19:47.697020 | orchestrator | Task failed.
2025-07-25 00:19:47.697033 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:2:3
2025-07-25 00:19:47.697045 | orchestrator |
2025-07-25 00:19:47.697056 | orchestrator | 1 ---
2025-07-25 00:19:47.697066 | orchestrator | 2 - name: Write scanned known_hosts entries
2025-07-25 00:19:47.697077 | orchestrator |  ^ column 3
2025-07-25 00:19:47.697088 | orchestrator |
2025-07-25 00:19:47.697099 | orchestrator | <<< caused by >>>
2025-07-25 00:19:47.697109 | orchestrator |
2025-07-25 00:19:47.697121 | orchestrator | Conditional result was '3' of type 'int', which evaluates to True. Conditionals must have a boolean result.
2025-07-25 00:19:47.697132 | orchestrator | Origin: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml:12:7
2025-07-25 00:19:47.697143 | orchestrator |
2025-07-25 00:19:47.697153 | orchestrator | 10 when:
2025-07-25 00:19:47.697164 | orchestrator | 11 - item['stdout_lines'] is defined
2025-07-25 00:19:47.697240 | orchestrator | 12 - item['stdout_lines'] | length
2025-07-25 00:19:47.697254 | orchestrator |  ^ column 7
2025-07-25 00:19:47.697265 | orchestrator |
2025-07-25 00:19:47.697295 | orchestrator | Broken conditionals can be temporarily allowed with the `ALLOW_BROKEN_CONDITIONALS` configuration option.
2025-07-25 00:19:47.697307 | orchestrator |
2025-07-25 00:19:47.697319 | orchestrator | failed: [testbed-manager] (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhldSQL/nMRe97Ir7furQs/9Nl8CflnxlbFHs+H1LKgrjiDAQ3sOtZu9i5nif88K9xTJGPq+Cw/vDWIzLKc+/Y=) => changed=false
2025-07-25 00:19:47.697332 | orchestrator |  ansible_loop_var: inner_item
2025-07-25 00:19:47.697343 | orchestrator |  inner_item: testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhldSQL/nMRe97Ir7furQs/9Nl8CflnxlbFHs+H1LKgrjiDAQ3sOtZu9i5nif88K9xTJGPq+Cw/vDWIzLKc+/Y=
2025-07-25 00:19:47.697377 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-25 00:19:47.697393 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7l6AvuE8wUbV1wGSXUNtZrnphtTscJpPlJq1qtp4Gr+QvzQNNQsAFtZsCtI6osTMJT2MW1wBu6yUaOBxkiXyhwn/GmaBN5lmdsjjyfjITka1LHzsiFNZBWwKXl7aPHgFPBtV8IzXGt3anZmgvsHjuVlLx6JNJM1UVOuczWluqGgEogV7qFe+MwFJYSfCBxTTB1ZOVtLBXSR/f2SEIbjH7saX6QwkfdUTL+hj6vXmZhk7d+AwnPcYz5TA8P7wuYQ/q2OtsmGXPL7sqdKGyclMLoT0otMS+XyRyJ60wQaS+aT0a/ZI70JQfjQp7uJQ0mtwEv6A6cU1irx0ol40TX7er2O0TZPvjFL+nGy7zhmoy7rC6qIsJYAKM9puFsX7KQEuNanI7KM5Q5ImO3v+P4OHcMnryjLjCIEdHJpmSlmlIZMq4ARGvd0GvtLC8kgTp4A+aDwsPJeGR4m6mPY8nUfefHGzh0qE3OMWdIk16RVEj7VIpBdt7Ror/H7ISLF4ApbM=) => changed=false
2025-07-25 00:19:47.697409 | orchestrator |  ansible_loop_var: inner_item
2025-07-25 00:19:47.697422 | orchestrator |  inner_item: testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7l6AvuE8wUbV1wGSXUNtZrnphtTscJpPlJq1qtp4Gr+QvzQNNQsAFtZsCtI6osTMJT2MW1wBu6yUaOBxkiXyhwn/GmaBN5lmdsjjyfjITka1LHzsiFNZBWwKXl7aPHgFPBtV8IzXGt3anZmgvsHjuVlLx6JNJM1UVOuczWluqGgEogV7qFe+MwFJYSfCBxTTB1ZOVtLBXSR/f2SEIbjH7saX6QwkfdUTL+hj6vXmZhk7d+AwnPcYz5TA8P7wuYQ/q2OtsmGXPL7sqdKGyclMLoT0otMS+XyRyJ60wQaS+aT0a/ZI70JQfjQp7uJQ0mtwEv6A6cU1irx0ol40TX7er2O0TZPvjFL+nGy7zhmoy7rC6qIsJYAKM9puFsX7KQEuNanI7KM5Q5ImO3v+P4OHcMnryjLjCIEdHJpmSlmlIZMq4ARGvd0GvtLC8kgTp4A+aDwsPJeGR4m6mPY8nUfefHGzh0qE3OMWdIk16RVEj7VIpBdt7Ror/H7ISLF4ApbM=
2025-07-25 00:19:47.697436 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-25 00:19:47.697448 | orchestrator | failed: [testbed-manager] (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAVV5djlR964rNTBt7/yZJ4DYmo8wqFW9w+eje8drir/) => changed=false
2025-07-25 00:19:47.697462 | orchestrator |  ansible_loop_var: inner_item
2025-07-25 00:19:47.697492 | orchestrator |  inner_item: testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAVV5djlR964rNTBt7/yZJ4DYmo8wqFW9w+eje8drir/
2025-07-25 00:19:47.697505 | orchestrator |  msg: 'Task failed: Conditional result was ''3'' of type ''int'', which evaluates to True. Conditionals must have a boolean result.'
2025-07-25 00:19:47.697517 | orchestrator |
2025-07-25 00:19:47.697530 | orchestrator | PLAY RECAP *********************************************************************
2025-07-25 00:19:47.697543 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-25 00:19:47.697555 | orchestrator |
2025-07-25 00:19:47.697568 | orchestrator |
2025-07-25 00:19:47.697580 | orchestrator | TASKS RECAP ********************************************************************
2025-07-25 00:19:47.697592 | orchestrator | Friday 25 July 2025 00:19:47 +0000 (0:00:00.102) 0:00:06.932 ***********
2025-07-25 00:19:47.697605 | orchestrator | ===============================================================================
2025-07-25 00:19:47.697618 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.49s
2025-07-25 00:19:47.697630 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s
2025-07-25 00:19:47.697642 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.10s
2025-07-25 00:19:48.357424 | orchestrator | ERROR
2025-07-25 00:19:48.357895 | orchestrator | {
2025-07-25 00:19:48.358047 | orchestrator | "delta": "0:06:15.233072",
2025-07-25 00:19:48.358137 | orchestrator | "end": "2025-07-25 00:19:47.973604",
2025-07-25 00:19:48.358194 | orchestrator | "msg": "non-zero return code",
2025-07-25 00:19:48.358246 | orchestrator | "rc": 2,
2025-07-25 00:19:48.358295 | orchestrator | "start": "2025-07-25 00:13:32.740532"
2025-07-25 00:19:48.358341 | orchestrator | } failure
2025-07-25 00:19:48.384650 |
2025-07-25 00:19:48.384895 | PLAY RECAP
2025-07-25 00:19:48.385039 | orchestrator | ok: 20 changed: 7 unreachable: 0 failed: 1 skipped: 2 rescued: 0 ignored: 0
2025-07-25 00:19:48.385111 |
2025-07-25 00:19:48.537895 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-25 00:19:48.541623 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-25 00:19:49.285727 |
2025-07-25 00:19:49.286033 | PLAY [Post output play]
2025-07-25 00:19:49.303553 |
2025-07-25 00:19:49.303692 | LOOP [stage-output : Register sources]
2025-07-25 00:19:49.357018 |
2025-07-25 00:19:49.357228 | TASK [stage-output : Check sudo]
2025-07-25 00:19:50.525882 | orchestrator | sudo: a password is required
2025-07-25 00:19:50.893465 | orchestrator | ok: Runtime: 0:00:00.333322
2025-07-25 00:19:50.909362 |
2025-07-25 00:19:50.909531 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-25 00:19:50.947546 |
2025-07-25 00:19:50.947805 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-25 00:19:51.025337 | orchestrator | ok
2025-07-25 00:19:51.034507 |
2025-07-25 00:19:51.034649 | LOOP [stage-output : Ensure target folders exist]
2025-07-25 00:19:51.501151 | orchestrator | ok: "docs"
2025-07-25 00:19:51.501449 |
2025-07-25 00:19:51.744572 | orchestrator | ok: "artifacts"
2025-07-25 00:19:51.990362 | orchestrator | ok: "logs"
2025-07-25 00:19:52.012928 |
2025-07-25 00:19:52.013134 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-25 00:19:52.050006 |
2025-07-25 00:19:52.050289 | TASK [stage-output : Make all log files readable]
2025-07-25 00:19:52.337403 | orchestrator | ok
2025-07-25 00:19:52.344632 |
2025-07-25 00:19:52.344747 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-25 00:19:52.368879 | orchestrator | skipping: Conditional result was False
2025-07-25 00:19:52.376546 |
2025-07-25 00:19:52.376652 | TASK [stage-output : Discover log files for compression]
2025-07-25 00:19:52.400749 | orchestrator | skipping: Conditional result was False
2025-07-25 00:19:52.407603 |
2025-07-25 00:19:52.407703 | LOOP [stage-output : Archive everything from logs]
2025-07-25 00:19:52.453695 |
2025-07-25 00:19:52.453912 | PLAY [Post cleanup play]
2025-07-25 00:19:52.463065 |
2025-07-25 00:19:52.463175 | TASK [Set cloud fact (Zuul deployment)]
2025-07-25 00:19:52.520149 | orchestrator | ok
2025-07-25 00:19:52.532324 |
2025-07-25 00:19:52.532445 | TASK [Set cloud fact (local deployment)]
2025-07-25 00:19:52.566499 | orchestrator | skipping: Conditional result was False
2025-07-25 00:19:52.584163 |
2025-07-25 00:19:52.584338 | TASK [Clean the cloud environment]
2025-07-25 00:19:53.184977 | orchestrator | 2025-07-25 00:19:53 - clean up servers
2025-07-25 00:19:53.963719 | orchestrator | 2025-07-25 00:19:53 - testbed-manager
2025-07-25 00:19:54.045108 | orchestrator | 2025-07-25 00:19:54 - testbed-node-3
2025-07-25 00:19:54.135914 | orchestrator | 2025-07-25 00:19:54 - testbed-node-1
2025-07-25 00:19:54.232468 | orchestrator | 2025-07-25 00:19:54 - testbed-node-5
2025-07-25 00:19:54.330509 | orchestrator | 2025-07-25 00:19:54 - testbed-node-0
2025-07-25 00:19:54.436151 | orchestrator | 2025-07-25 00:19:54 - testbed-node-2
2025-07-25 00:19:54.534948 | orchestrator | 2025-07-25 00:19:54 - testbed-node-4
2025-07-25 00:19:54.626596 | orchestrator | 2025-07-25 00:19:54 - clean up keypairs
2025-07-25 00:19:54.645017 | orchestrator | 2025-07-25 00:19:54 - testbed
2025-07-25 00:19:54.673515 | orchestrator | 2025-07-25 00:19:54 - wait for servers to be gone
2025-07-25 00:20:05.948767 | orchestrator | 2025-07-25 00:20:05 - clean up ports
2025-07-25 00:20:06.154240 | orchestrator | 2025-07-25 00:20:06 - 303da216-d145-4511-b554-af2fe1014875
2025-07-25 00:20:06.440978 | orchestrator | 2025-07-25 00:20:06 - 907f7683-97a6-49e1-9916-ef84e50cbbd6
2025-07-25 00:20:06.720370 | orchestrator | 2025-07-25 00:20:06 - 9cf3cb16-df3b-486d-b9c8-45f823756a91
2025-07-25 00:20:06.988794 | orchestrator | 2025-07-25 00:20:06 - c05ca2b5-3389-4866-8abc-16645348220c
2025-07-25 00:20:07.205581 | orchestrator | 2025-07-25 00:20:07 - dfaa182a-fe74-4cae-b769-0bb8506e1cf1
2025-07-25 00:20:07.408534 | orchestrator | 2025-07-25 00:20:07 - e38f99b8-2c31-4c30-8b09-ba991aec4970
2025-07-25 00:20:07.605890 | orchestrator | 2025-07-25 00:20:07 - f58ea015-415b-4222-896d-ce7dae2ee823
2025-07-25 00:20:08.051645 | orchestrator | 2025-07-25 00:20:08 - clean up volumes
2025-07-25 00:20:08.171516 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-2-node-base
2025-07-25 00:20:08.209152 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-4-node-base
2025-07-25 00:20:08.249147 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-5-node-base
2025-07-25 00:20:08.288115 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-0-node-base
2025-07-25 00:20:08.329527 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-3-node-base
2025-07-25 00:20:08.371924 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-1-node-base
2025-07-25 00:20:08.415153 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-manager-base
2025-07-25 00:20:08.456713 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-1-node-4
2025-07-25 00:20:08.496892 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-8-node-5
2025-07-25 00:20:08.540563 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-2-node-5
2025-07-25 00:20:08.583999 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-0-node-3
2025-07-25 00:20:08.628942 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-5-node-5
2025-07-25 00:20:08.675384 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-7-node-4
2025-07-25 00:20:08.715154 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-3-node-3
2025-07-25 00:20:08.755000 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-6-node-3
2025-07-25 00:20:08.796686 | orchestrator | 2025-07-25 00:20:08 - testbed-volume-4-node-4
2025-07-25 00:20:08.839538 | orchestrator | 2025-07-25 00:20:08 - disconnect routers
2025-07-25 00:20:08.969855 | orchestrator | 2025-07-25 00:20:08 - testbed
2025-07-25 00:20:09.934906 | orchestrator | 2025-07-25 00:20:09 - clean up subnets
2025-07-25 00:20:10.002612 | orchestrator | 2025-07-25 00:20:10 - subnet-testbed-management
2025-07-25 00:20:10.529831 | orchestrator | 2025-07-25 00:20:10 - clean up networks
2025-07-25 00:20:10.700853 | orchestrator | 2025-07-25 00:20:10 - net-testbed-management
2025-07-25 00:20:10.992738 | orchestrator | 2025-07-25 00:20:10 - clean up security groups
2025-07-25 00:20:11.028385 | orchestrator | 2025-07-25 00:20:11 - testbed-node
2025-07-25 00:20:11.138133 | orchestrator | 2025-07-25 00:20:11 - testbed-management
2025-07-25 00:20:11.254967 | orchestrator | 2025-07-25 00:20:11 - clean up floating ips
2025-07-25 00:20:11.287466 | orchestrator | 2025-07-25 00:20:11 - 81.163.193.172
2025-07-25 00:20:11.634413 | orchestrator | 2025-07-25 00:20:11 - clean up routers
2025-07-25 00:20:11.733502 | orchestrator | 2025-07-25 00:20:11 - testbed
2025-07-25 00:20:13.146708 | orchestrator | ok: Runtime: 0:00:19.845479
2025-07-25 00:20:13.151234 |
2025-07-25 00:20:13.151407 | PLAY RECAP
2025-07-25 00:20:13.151588 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-25 00:20:13.151676 |
2025-07-25 00:20:13.286678 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-25 00:20:13.289179 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-25 00:20:14.032783 |
2025-07-25 00:20:14.032956 | PLAY [Cleanup play]
2025-07-25 00:20:14.048576 |
2025-07-25 00:20:14.048703 | TASK [Set cloud fact (Zuul deployment)]
2025-07-25 00:20:14.100768 | orchestrator | ok
2025-07-25 00:20:14.108674 |
2025-07-25 00:20:14.108805 | TASK [Set cloud fact (local deployment)]
2025-07-25 00:20:14.142868 | orchestrator | skipping: Conditional result was False
2025-07-25 00:20:14.158714 |
2025-07-25 00:20:14.158917 | TASK [Clean the cloud environment]
2025-07-25 00:20:15.327953 | orchestrator | 2025-07-25 00:20:15 - clean up servers
2025-07-25 00:20:15.870568 | orchestrator | 2025-07-25 00:20:15 - clean up keypairs
2025-07-25 00:20:15.887561 | orchestrator | 2025-07-25 00:20:15 - wait for servers to be gone
2025-07-25 00:20:15.929491 | orchestrator | 2025-07-25 00:20:15 - clean up ports
2025-07-25 00:20:16.008648 | orchestrator | 2025-07-25 00:20:16 - clean up volumes
2025-07-25 00:20:16.071740 | orchestrator | 2025-07-25 00:20:16 - disconnect routers
2025-07-25 00:20:16.099574 | orchestrator | 2025-07-25 00:20:16 - clean up subnets
2025-07-25 00:20:16.123725 | orchestrator | 2025-07-25 00:20:16 - clean up networks
2025-07-25 00:20:16.286010 | orchestrator | 2025-07-25 00:20:16 - clean up security groups
2025-07-25 00:20:16.323192 | orchestrator | 2025-07-25 00:20:16 - clean up floating ips
2025-07-25 00:20:16.347570 | orchestrator | 2025-07-25 00:20:16 - clean up routers
2025-07-25 00:20:16.698779 | orchestrator | ok: Runtime: 0:00:01.430766
2025-07-25 00:20:16.702966 |
2025-07-25 00:20:16.703134 | PLAY RECAP
2025-07-25 00:20:16.703265 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-25 00:20:16.703333 |
2025-07-25 00:20:16.828520 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-25 00:20:16.830987 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-25 00:20:17.575280 |
2025-07-25 00:20:17.575458 | PLAY [Base post-fetch]
2025-07-25 00:20:17.591209 |
2025-07-25 00:20:17.591345 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-25 00:20:17.656857 | orchestrator | skipping: Conditional result was False
2025-07-25 00:20:17.672528 |
2025-07-25 00:20:17.672738 | TASK [fetch-output : Set log path for single node]
2025-07-25 00:20:17.722462 | orchestrator | ok
2025-07-25 00:20:17.733054 |
2025-07-25 00:20:17.733208 | LOOP [fetch-output : Ensure local output dirs]
2025-07-25 00:20:18.220635 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/d3b177b786864e9fa5b133c6bb8ca532/work/logs"
2025-07-25 00:20:18.490281 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d3b177b786864e9fa5b133c6bb8ca532/work/artifacts"
2025-07-25 00:20:18.768350 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d3b177b786864e9fa5b133c6bb8ca532/work/docs"
2025-07-25 00:20:18.792989 |
2025-07-25 00:20:18.793153 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-25 00:20:19.728920 | orchestrator | changed: .d..t...... ./
2025-07-25 00:20:19.729249 | orchestrator | changed: All items complete
2025-07-25 00:20:19.729303 |
2025-07-25 00:20:20.433844 | orchestrator | changed: .d..t...... ./
2025-07-25 00:20:21.181293 | orchestrator | changed: .d..t...... ./
2025-07-25 00:20:21.206140 |
2025-07-25 00:20:21.206293 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-25 00:20:21.242527 | orchestrator | skipping: Conditional result was False
2025-07-25 00:20:21.245002 | orchestrator | skipping: Conditional result was False
2025-07-25 00:20:21.271600 |
2025-07-25 00:20:21.271751 | PLAY RECAP
2025-07-25 00:20:21.271875 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-25 00:20:21.271930 |
2025-07-25 00:20:21.397452 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-25 00:20:21.398428 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-25 00:20:22.132541 |
2025-07-25 00:20:22.132698 | PLAY [Base post]
2025-07-25 00:20:22.147428 |
2025-07-25 00:20:22.147572 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-25 00:20:23.489008 | orchestrator | changed
2025-07-25 00:20:23.500556 |
2025-07-25 00:20:23.500685 | PLAY RECAP
2025-07-25 00:20:23.500764 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-25 00:20:23.500862 |
2025-07-25 00:20:23.615311 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-25 00:20:23.616372 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-25 00:20:24.388540 |
2025-07-25 00:20:24.388706 | PLAY [Base post-logs]
2025-07-25 00:20:24.399339 |
2025-07-25 00:20:24.399471 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-25 00:20:24.865524 | localhost | changed
2025-07-25 00:20:24.882559 |
2025-07-25 00:20:24.882764 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-25 00:20:24.912875 | localhost | ok
2025-07-25 00:20:24.922781 |
2025-07-25 00:20:24.923106 | TASK [Set zuul-log-path fact]
2025-07-25 00:20:24.941505 | localhost | ok
2025-07-25 00:20:24.951634 |
2025-07-25 00:20:24.951747 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-25 00:20:24.977367 | localhost | ok
2025-07-25 00:20:24.980683 |
2025-07-25 00:20:24.980792 | TASK [upload-logs : Create log directories]
2025-07-25 00:20:25.499135 | localhost | changed
2025-07-25 00:20:25.503412 |
2025-07-25 00:20:25.503570 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-25 00:20:26.006536 | localhost -> localhost | ok: Runtime: 0:00:00.007768
2025-07-25 00:20:26.017410 |
2025-07-25 00:20:26.017611 | TASK [upload-logs : Upload logs to log server]
2025-07-25 00:20:26.623509 | localhost | Output suppressed because no_log was given
2025-07-25 00:20:26.626358 |
2025-07-25 00:20:26.626506 | LOOP [upload-logs : Compress console log and json output]
2025-07-25 00:20:26.685387 | localhost | skipping: Conditional result was False
2025-07-25 00:20:26.690368 | localhost | skipping: Conditional result was False
2025-07-25 00:20:26.697960 |
2025-07-25 00:20:26.698178 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-25 00:20:26.745192 | localhost | skipping: Conditional result was False
2025-07-25 00:20:26.745815 |
2025-07-25 00:20:26.749416 | localhost | skipping: Conditional result was False
2025-07-25 00:20:26.762932 |
2025-07-25 00:20:26.763166 | LOOP [upload-logs : Upload console log and json output]