2025-05-14 12:50:38.581534 | Job console starting
2025-05-14 12:50:38.597856 | Updating git repos
2025-05-14 12:50:38.669245 | Cloning repos into workspace
2025-05-14 12:50:38.828872 | Restoring repo states
2025-05-14 12:50:38.870119 | Merging changes
2025-05-14 12:50:38.870146 | Checking out repos
2025-05-14 12:50:39.116802 | Preparing playbooks
2025-05-14 12:50:39.743654 | Running Ansible setup
2025-05-14 12:50:44.158695 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-14 12:50:44.912819 |
2025-05-14 12:50:44.912988 | PLAY [Base pre]
2025-05-14 12:50:44.930260 |
2025-05-14 12:50:44.930426 | TASK [Setup log path fact]
2025-05-14 12:50:44.962166 | orchestrator | ok
2025-05-14 12:50:44.980711 |
2025-05-14 12:50:44.980869 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-14 12:50:45.012425 | orchestrator | ok
2025-05-14 12:50:45.024973 |
2025-05-14 12:50:45.025114 | TASK [emit-job-header : Print job information]
2025-05-14 12:50:45.069186 | # Job Information
2025-05-14 12:50:45.069506 | Ansible Version: 2.16.14
2025-05-14 12:50:45.069560 | Job: testbed-deploy-in-a-nutshell-with-tempest-ubuntu-24.04
2025-05-14 12:50:45.069608 | Pipeline: periodic-daily
2025-05-14 12:50:45.069642 | Executor: 521e9411259a
2025-05-14 12:50:45.069672 | Triggered by: https://github.com/osism/testbed
2025-05-14 12:50:45.069703 | Event ID: 6c91c4a85ac14584838ab44f58195915
2025-05-14 12:50:45.079278 |
2025-05-14 12:50:45.079477 | LOOP [emit-job-header : Print node information]
2025-05-14 12:50:45.225235 | orchestrator | ok:
2025-05-14 12:50:45.225583 | orchestrator | # Node Information
2025-05-14 12:50:45.225642 | orchestrator | Inventory Hostname: orchestrator
2025-05-14 12:50:45.225684 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-14 12:50:45.225722 | orchestrator | Username: zuul-testbed04
2025-05-14 12:50:45.225757 | orchestrator | Distro: Debian 12.10
2025-05-14 12:50:45.225796 | orchestrator | Provider: static-testbed
2025-05-14 12:50:45.225830 | orchestrator | Region:
2025-05-14 12:50:45.225865 | orchestrator | Label: testbed-orchestrator
2025-05-14 12:50:45.225897 | orchestrator | Product Name: OpenStack Nova
2025-05-14 12:50:45.225929 | orchestrator | Interface IP: 81.163.193.140
2025-05-14 12:50:45.255099 |
2025-05-14 12:50:45.255311 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-14 12:50:45.830540 | orchestrator -> localhost | changed
2025-05-14 12:50:45.845315 |
2025-05-14 12:50:45.845538 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-14 12:50:46.928979 | orchestrator -> localhost | changed
2025-05-14 12:50:46.953748 |
2025-05-14 12:50:46.953909 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-14 12:50:47.283160 | orchestrator -> localhost | ok
2025-05-14 12:50:47.290850 |
2025-05-14 12:50:47.290997 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-14 12:50:47.322813 | orchestrator | ok
2025-05-14 12:50:47.340028 | orchestrator | included: /var/lib/zuul/builds/d1b692c6f21c4dabb19ab37a03516b02/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-14 12:50:47.348342 |
2025-05-14 12:50:47.348483 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-14 12:50:49.242766 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-14 12:50:49.243390 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/d1b692c6f21c4dabb19ab37a03516b02/work/d1b692c6f21c4dabb19ab37a03516b02_id_rsa
2025-05-14 12:50:49.243508 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/d1b692c6f21c4dabb19ab37a03516b02/work/d1b692c6f21c4dabb19ab37a03516b02_id_rsa.pub
2025-05-14 12:50:49.243585 | orchestrator -> localhost | The key fingerprint is:
2025-05-14 12:50:49.243655 | orchestrator -> localhost | SHA256:oKIViLrPw4zzS9c0Ckg7+dcCDaJH3+dCrM6OG34M5m0 zuul-build-sshkey
2025-05-14 12:50:49.243721 | orchestrator -> localhost | The key's randomart image is:
2025-05-14 12:50:49.243807 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-14 12:50:49.243871 | orchestrator -> localhost | |                 |
2025-05-14 12:50:49.243935 | orchestrator -> localhost | |..               |
2025-05-14 12:50:49.243994 | orchestrator -> localhost | |ooo. .           |
2025-05-14 12:50:49.244051 | orchestrator -> localhost | |+o+oo+ .         |
2025-05-14 12:50:49.244108 | orchestrator -> localhost | |++=.o.* S        |
2025-05-14 12:50:49.244181 | orchestrator -> localhost | | ==o.*.+         |
2025-05-14 12:50:49.244238 | orchestrator -> localhost | |o*o=+oo..        |
2025-05-14 12:50:49.244295 | orchestrator -> localhost | |o==*E ..         |
2025-05-14 12:50:49.244373 | orchestrator -> localhost | | oOB+            |
2025-05-14 12:50:49.244437 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-14 12:50:49.244586 | orchestrator -> localhost | ok: Runtime: 0:00:01.368736
2025-05-14 12:50:49.261448 |
2025-05-14 12:50:49.261636 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-14 12:50:49.302095 | orchestrator | ok
2025-05-14 12:50:49.318176 | orchestrator | included: /var/lib/zuul/builds/d1b692c6f21c4dabb19ab37a03516b02/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-14 12:50:49.328397 |
2025-05-14 12:50:49.328513 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-14 12:50:49.353156 | orchestrator | skipping: Conditional result was False
2025-05-14 12:50:49.365195 |
2025-05-14 12:50:49.365321 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-14 12:50:49.989998 | orchestrator | changed
2025-05-14 12:50:49.998801 |
2025-05-14 12:50:49.998969 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-14 12:50:50.283816 | orchestrator | ok
2025-05-14 12:50:50.293249 |
2025-05-14 12:50:50.293419 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-14 12:50:50.739631 | orchestrator | ok
2025-05-14 12:50:50.749130 |
2025-05-14 12:50:50.749274 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-14 12:50:51.181700 | orchestrator | ok
2025-05-14 12:50:51.190488 |
2025-05-14 12:50:51.190629 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-14 12:50:51.216891 | orchestrator | skipping: Conditional result was False
2025-05-14 12:50:51.226884 |
2025-05-14 12:50:51.227010 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-14 12:50:51.718241 | orchestrator -> localhost | changed
2025-05-14 12:50:51.738043 |
2025-05-14 12:50:51.738222 | TASK [add-build-sshkey : Add back temp key]
2025-05-14 12:50:52.112481 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/d1b692c6f21c4dabb19ab37a03516b02/work/d1b692c6f21c4dabb19ab37a03516b02_id_rsa (zuul-build-sshkey)
2025-05-14 12:50:52.113171 | orchestrator -> localhost | ok: Runtime: 0:00:00.022016
2025-05-14 12:50:52.128751 |
2025-05-14 12:50:52.128962 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-14 12:50:52.584519 | orchestrator | ok
2025-05-14 12:50:52.592978 |
2025-05-14 12:50:52.593117 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-14 12:50:52.627771 | orchestrator | skipping: Conditional result was False
2025-05-14 12:50:52.691752 |
2025-05-14 12:50:52.691909 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-14 12:50:53.105439 | orchestrator | ok
2025-05-14 12:50:53.119536 |
2025-05-14 12:50:53.119676 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-14 12:50:53.161515 | orchestrator | ok
2025-05-14 12:50:53.169987 |
2025-05-14 12:50:53.170098 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-14 12:50:53.479209 | orchestrator -> localhost | ok
2025-05-14 12:50:53.494707 |
2025-05-14 12:50:53.494880 | TASK [validate-host : Collect information about the host]
2025-05-14 12:50:54.756269 | orchestrator | ok
2025-05-14 12:50:54.773541 |
2025-05-14 12:50:54.773683 | TASK [validate-host : Sanitize hostname]
2025-05-14 12:50:54.836068 | orchestrator | ok
2025-05-14 12:50:54.843641 |
2025-05-14 12:50:54.843774 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-14 12:50:55.457238 | orchestrator -> localhost | changed
2025-05-14 12:50:55.471591 |
2025-05-14 12:50:55.471784 | TASK [validate-host : Collect information about zuul worker]
2025-05-14 12:50:55.927698 | orchestrator | ok
2025-05-14 12:50:55.935901 |
2025-05-14 12:50:55.936055 | TASK [validate-host : Write out all zuul information for each host]
2025-05-14 12:50:56.508792 | orchestrator -> localhost | changed
2025-05-14 12:50:56.529279 |
2025-05-14 12:50:56.529488 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-14 12:50:56.820525 | orchestrator | ok
2025-05-14 12:50:56.829473 |
2025-05-14 12:50:56.829608 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-14 12:51:17.503986 | orchestrator | changed:
2025-05-14 12:51:17.504241 | orchestrator | .d..t...... src/
2025-05-14 12:51:17.504276 | orchestrator | .d..t...... src/github.com/
2025-05-14 12:51:17.504301 | orchestrator | .d..t...... src/github.com/osism/
2025-05-14 12:51:17.504323 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-14 12:51:17.504344 | orchestrator | RedHat.yml
2025-05-14 12:51:17.514274 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-14 12:51:17.514292 | orchestrator | RedHat.yml
2025-05-14 12:51:17.514345 | orchestrator |
2025-05-14 12:51:49.998304 | orchestrator |     main()
2025-05-14 12:51:49.998463 | orchestrator |   File "/home/zuul-testbed04/src/github.com/osism/testbed/terraform/scripts/cleanup.py", line 201, in main
2025-05-14 12:51:49.998632 | orchestrator |     cleanup_servers(conn, PREFIX)
2025-05-14 12:51:49.998692 | orchestrator |   File "/home/zuul-testbed04/src/github.com/osism/testbed/terraform/scripts/cleanup.py", line 120, in cleanup_servers
2025-05-14 12:51:49.998883 | orchestrator |     servers = list(conn.compute.servers(name=f"^{prefix}"))
2025-05-14 12:51:49.998919 | orchestrator |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:51:49.998938 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/openstack/resource.py", line 1775, in list
2025-05-14 12:51:50.001825 | orchestrator |     exceptions.raise_from_response(response)
2025-05-14 12:51:50.001869 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/openstack/exceptions.py", line 236, in raise_from_response
2025-05-14 12:51:50.003030 | orchestrator |     raise cls(
2025-05-14 12:51:50.003091 | orchestrator | openstack.exceptions.HttpException: HttpException: 503: Server Error for url: https://nova.services.a.regiocloud.tech/v2.1/servers/detail?name=%5Etestbed, 503 Service Unavailable: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.: Apache/2.4.52 (Ubuntu) Server at nova.services.a.regiocloud.tech Port 8774: Service Unavailable
2025-05-14 12:51:50.207564 | orchestrator | ok: Runtime: 0:00:30.914104
2025-05-14 12:51:50.228976 |
2025-05-14 12:51:50.229169 | TASK [Download terragrunt]
2025-05-14 12:51:51.359327 | orchestrator | ok: HTTP Error 304: Not Modified
2025-05-14 12:51:51.374023 |
2025-05-14 12:51:51.374356 | TASK [Extract tofu binary]
2025-05-14 12:51:54.371627 | orchestrator | ok
2025-05-14 12:51:54.385149 |
2025-05-14 12:51:54.385305 | TASK [Copy tofu binary]
2025-05-14 12:51:55.435911 | orchestrator | ok
2025-05-14 12:51:55.446213 |
2025-05-14 12:51:55.446352 | TASK [Sync terraform blueprint]
2025-05-14 12:51:55.746561 | orchestrator | sending incremental file list
2025-05-14 12:51:55.748544 | orchestrator | data.tf
2025-05-14 12:51:55.750511 | orchestrator | main.tf
2025-05-14 12:51:55.750572 | orchestrator | manager.tf
2025-05-14 12:51:55.750736 | orchestrator | neutron.tf
2025-05-14 12:51:55.750815 | orchestrator | nodes.tf
2025-05-14 12:51:55.750836 | orchestrator | nova.tf
2025-05-14 12:51:55.750887 | orchestrator | outputs.tf
2025-05-14 12:51:55.750913 | orchestrator | provider.tf
2025-05-14 12:51:55.750931 | orchestrator | terragrunt.hcl
2025-05-14 12:51:55.750973 | orchestrator | variables.tf
2025-05-14 12:51:55.751891 | orchestrator | customisations/
2025-05-14 12:51:55.751927 | orchestrator | customisations/access_floatingip_custom.tf
2025-05-14 12:51:55.751945 | orchestrator | customisations/access_ipv4_custom.tf
2025-05-14 12:51:55.751962 | orchestrator | customisations/access_ipv6_custom.tf
2025-05-14 12:51:55.751974 | orchestrator | customisations/default_custom.tf
2025-05-14 12:51:55.751995 | orchestrator | customisations/external_api_custom.tf
2025-05-14 12:51:55.752049 | orchestrator | customisations/neutron_floatingip_custom.tf
2025-05-14 12:51:55.752078 | orchestrator | overrides/
2025-05-14 12:51:55.752093 | orchestrator | overrides/manager_boot_from_image_override.tf
2025-05-14 12:51:55.752109 | orchestrator | overrides/manager_boot_from_volume_override.tf
2025-05-14 12:51:55.752145 | orchestrator | overrides/neutron_availability_zone_hints_network_override.tf
2025-05-14 12:51:55.752164 | orchestrator | overrides/neutron_availability_zone_hints_router_override.tf
2025-05-14 12:51:55.752179 | orchestrator | overrides/neutron_router_enable_snat_override.tf
2025-05-14 12:51:55.752216 | orchestrator | overrides/nodes_boot_from_image_override.tf
2025-05-14 12:51:55.752231 | orchestrator | overrides/nodes_boot_from_volume_override.tf
2025-05-14 12:51:55.752246 | orchestrator | overrides/nodes_use_ephemeral_storage_override.tf
2025-05-14 12:51:55.795557 | orchestrator |
2025-05-14 12:51:55.795618 | orchestrator | sent 7,087 bytes received 489 bytes 15,152.00 bytes/sec
2025-05-14 12:51:55.795631 | orchestrator | total size is 26,814 speedup is 3.54
2025-05-14 12:51:55.984550 | orchestrator | ok: Runtime: 0:00:00.064629
2025-05-14 12:51:55.998971 |
2025-05-14 12:51:55.999113 | TASK [Create local.env file]
2025-05-14 12:51:56.638308 | orchestrator | changed
2025-05-14 12:51:56.640969 |
2025-05-14 12:51:56.641085 | PLAY RECAP
2025-05-14 12:51:56.641162 | orchestrator | ok: 7 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-14 12:51:56.641197 |
2025-05-14 12:51:56.789337 | PRE-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/pre.yml@main]
2025-05-14 12:51:56.792057 | RUN START: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-14 12:51:57.597419 |
2025-05-14 12:51:57.597606 | PLAY [Deploy testbed]
2025-05-14 12:51:57.613142 |
2025-05-14 12:51:57.613286 | TASK [Print used ceph version]
2025-05-14 12:51:57.693143 | orchestrator | ok
2025-05-14 12:51:57.704044 |
2025-05-14 12:51:57.704205 | TASK [Print used openstack version]
2025-05-14 12:51:57.785357 | orchestrator | ok
2025-05-14 12:51:57.794690 |
2025-05-14 12:51:57.794826 | TASK [Print used manager version]
2025-05-14 12:51:57.863856 | orchestrator | ok
2025-05-14 12:51:57.873959 |
2025-05-14 12:51:57.874098 | TASK [Set facts (Zuul deployment)]
2025-05-14 12:51:57.965155 | orchestrator | ok
2025-05-14 12:51:57.974464 |
2025-05-14 12:51:57.974580 | TASK [Set facts (local deployment)]
2025-05-14 12:51:58.010136 | orchestrator | skipping: Conditional result was False
2025-05-14 12:51:58.026191 |
2025-05-14 12:51:58.026329 | TASK [Create infrastructure (latest)]
2025-05-14 12:51:58.672404 | orchestrator | 12:51:58.672 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-14 12:51:58.775346 | orchestrator | 12:51:58.775 STDOUT terraform: Initializing the backend...
2025-05-14 12:51:58.778765 | orchestrator | 12:51:58.776 STDOUT terraform: Initializing provider plugins...
2025-05-14 12:51:58.778807 | orchestrator | 12:51:58.776 STDOUT terraform: - terraform.io/builtin/terraform is built in to OpenTofu
2025-05-14 12:51:58.778820 | orchestrator | 12:51:58.776 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-14 12:51:59.019071 | orchestrator | 12:51:59.018 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-05-14 12:51:59.122714 | orchestrator | 12:51:59.122 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-14 12:52:00.520383 | orchestrator | 12:52:00.520 STDOUT terraform: - Installing hashicorp/null v3.2.4...
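[Editor's note] The pre-run cleanup script above failed with an openstacksdk `HttpException` (HTTP 503 from Nova) while listing servers in `cleanup.py`; the task nevertheless reported `ok`, so the transient outage went unhandled. A small retry wrapper could absorb such 5xx blips. This is only an illustrative sketch, not part of the testbed repo: the helper name and backoff policy are assumptions, and it relies on openstacksdk exceptions exposing a `status_code` attribute.

```python
import time

def retry_on_transient(fn, attempts=3, delay=1.0, transient=(502, 503, 504)):
    """Retry fn() when it raises an exception carrying a transient HTTP
    status code; re-raise on other errors or when attempts run out."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:
            status = getattr(exc, "status_code", None)
            if status not in transient or attempt == attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff

# Hypothetical use around the failing call in cleanup.py:
# servers = retry_on_transient(
#     lambda: list(conn.compute.servers(name=f"^{prefix}")))
```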
2025-05-14 12:52:01.565281 | orchestrator | 12:52:01.565 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-14 12:52:03.130737 | orchestrator | 12:52:03.130 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-14 12:52:04.419195 | orchestrator | 12:52:04.418 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-14 12:52:05.910723 | orchestrator | 12:52:05.910 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-05-14 12:52:07.331268 | orchestrator | 12:52:07.330 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-05-14 12:52:07.331414 | orchestrator | 12:52:07.331 STDOUT terraform: Providers are signed by their developers.
2025-05-14 12:52:07.331433 | orchestrator | 12:52:07.331 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-14 12:52:07.331450 | orchestrator | 12:52:07.331 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-14 12:52:07.331594 | orchestrator | 12:52:07.331 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-14 12:52:07.331819 | orchestrator | 12:52:07.331 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-14 12:52:07.331991 | orchestrator | 12:52:07.331 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-14 12:52:07.332052 | orchestrator | 12:52:07.331 STDOUT terraform: you run "tofu init" in the future.
2025-05-14 12:52:07.332156 | orchestrator | 12:52:07.332 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-14 12:52:07.332290 | orchestrator | 12:52:07.332 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-14 12:52:07.332427 | orchestrator | 12:52:07.332 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-14 12:52:07.332472 | orchestrator | 12:52:07.332 STDOUT terraform: should now work.
2025-05-14 12:52:07.332731 | orchestrator | 12:52:07.332 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-14 12:52:07.332920 | orchestrator | 12:52:07.332 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-14 12:52:07.333044 | orchestrator | 12:52:07.332 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-14 12:52:07.515735 | orchestrator | 12:52:07.515 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-14 12:52:07.737125 | orchestrator | 12:52:07.736 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-14 12:52:07.737254 | orchestrator | 12:52:07.737 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-14 12:52:07.737442 | orchestrator | 12:52:07.737 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-14 12:52:07.737537 | orchestrator | 12:52:07.737 STDOUT terraform: for this configuration.
2025-05-14 12:52:07.989760 | orchestrator | 12:52:07.989 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-14 12:52:08.115048 | orchestrator | 12:52:08.114 STDOUT terraform: ci.auto.tfvars
2025-05-14 12:52:08.120084 | orchestrator | 12:52:08.120 STDOUT terraform: default_custom.tf
2025-05-14 12:52:08.325587 | orchestrator | 12:52:08.325 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-14 12:52:09.342841 | orchestrator | 12:52:09.342 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-14 12:52:19.343999 | orchestrator | 12:52:19.343 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [10s elapsed]
2025-05-14 12:52:29.345050 | orchestrator | 12:52:29.344 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [20s elapsed]
2025-05-14 12:52:39.345740 | orchestrator | 12:52:39.345 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [30s elapsed]
2025-05-14 12:52:49.346829 | orchestrator | 12:52:49.346 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [40s elapsed]
2025-05-14 12:52:59.347853 | orchestrator | 12:52:59.347 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [50s elapsed]
2025-05-14 12:53:09.348909 | orchestrator | 12:53:09.348 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [1m0s elapsed]
2025-05-14 12:53:19.349936 | orchestrator | 12:53:19.349 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [1m10s elapsed]
2025-05-14 12:53:29.350878 | orchestrator | 12:53:29.350 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [1m20s elapsed]
2025-05-14 12:53:39.351736 | orchestrator | 12:53:39.351 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [1m30s elapsed]
2025-05-14 12:53:49.352742 | orchestrator | 12:53:49.352 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [1m40s elapsed]
2025-05-14 12:53:59.352942 | orchestrator | 12:53:59.352 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [1m50s elapsed]
2025-05-14 12:54:09.354854 | orchestrator | 12:54:09.353 STDOUT terraform: data.openstack_networking_network_v2.public: Still reading... [2m0s elapsed]
2025-05-14 12:54:09.749952 | orchestrator | 12:54:09.749 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-14 12:54:09.750085 | orchestrator | 12:54:09.749 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-14 12:54:09.750119 | orchestrator | 12:54:09.749 STDOUT terraform:   + create
2025-05-14 12:54:09.750139 | orchestrator | 12:54:09.749 STDOUT terraform:  <= read (data resources)
2025-05-14 12:54:09.750146 | orchestrator | 12:54:09.749 STDOUT terraform: OpenTofu planned the following actions, but then encountered a problem:
2025-05-14 12:54:09.750156 | orchestrator | 12:54:09.749 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-14 12:54:09.750162 | orchestrator | 12:54:09.750 STDOUT terraform:   # (config refers to values not yet known)
2025-05-14 12:54:09.750168 | orchestrator | 12:54:09.750 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-14 12:54:09.750177 | orchestrator | 12:54:09.750 STDOUT terraform:       + checksum = (known after apply)
2025-05-14 12:54:09.750185 | orchestrator | 12:54:09.750 STDOUT terraform:       + created_at = (known after apply)
2025-05-14 12:54:09.750239 | orchestrator | 12:54:09.750 STDOUT terraform:       + file = (known after apply)
2025-05-14 12:54:09.750289 | orchestrator | 12:54:09.750 STDOUT terraform:       + id = (known after apply)
2025-05-14 12:54:09.750326 | orchestrator | 12:54:09.750 STDOUT terraform:       + metadata = (known after apply)
2025-05-14 12:54:09.750367 | orchestrator | 12:54:09.750 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-14 12:54:09.750411 | orchestrator | 12:54:09.750 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-05-14 12:54:09.750446 | orchestrator | 12:54:09.750 STDOUT terraform:       + most_recent = true
2025-05-14 12:54:09.750495 | orchestrator | 12:54:09.750 STDOUT terraform:       + name = (known after apply)
2025-05-14 12:54:09.750533 | orchestrator | 12:54:09.750 STDOUT terraform:       + protected = (known after apply)
2025-05-14 12:54:09.750573 | orchestrator | 12:54:09.750 STDOUT terraform:       + region = (known after apply)
2025-05-14 12:54:09.750649 | orchestrator | 12:54:09.750 STDOUT terraform:       + schema = (known after apply)
2025-05-14 12:54:09.750703 | orchestrator | 12:54:09.750 STDOUT terraform:       + size_bytes = (known after apply)
2025-05-14 12:54:09.750739 | orchestrator | 12:54:09.750 STDOUT terraform:       + tags = (known after apply)
2025-05-14 12:54:09.750778 | orchestrator | 12:54:09.750 STDOUT terraform:       + updated_at = (known after apply)
2025-05-14 12:54:09.750799 | orchestrator | 12:54:09.750 STDOUT terraform:     }
2025-05-14 12:54:09.750909 | orchestrator | 12:54:09.750 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-14 12:54:09.750918 | orchestrator | 12:54:09.750 STDOUT terraform:   # (config refers to values not yet known)
2025-05-14 12:54:09.750959 | orchestrator | 12:54:09.750 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-14 12:54:09.751009 | orchestrator | 12:54:09.750 STDOUT terraform:       + checksum = (known after apply)
2025-05-14 12:54:09.751038 | orchestrator | 12:54:09.750 STDOUT terraform:       + created_at = (known after apply)
2025-05-14 12:54:09.751084 | orchestrator | 12:54:09.751 STDOUT terraform:       + file = (known after apply)
2025-05-14 12:54:09.751125 | orchestrator | 12:54:09.751 STDOUT terraform:       + id = (known after apply)
2025-05-14 12:54:09.751170 | orchestrator | 12:54:09.751 STDOUT terraform:       + metadata = (known after apply)
2025-05-14 12:54:09.751215 | orchestrator | 12:54:09.751 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-05-14 12:54:09.751254 | orchestrator | 12:54:09.751 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-05-14 12:54:09.751274 | orchestrator | 12:54:09.751 STDOUT terraform:       + most_recent = true
2025-05-14 12:54:09.751324 | orchestrator | 12:54:09.751 STDOUT terraform:       + name = (known after apply)
2025-05-14 12:54:09.751372 | orchestrator | 12:54:09.751 STDOUT terraform:       + protected = (known after apply)
2025-05-14 12:54:09.751412 | orchestrator | 12:54:09.751 STDOUT terraform:       + region = (known after apply)
2025-05-14 12:54:09.751466 | orchestrator | 12:54:09.751 STDOUT terraform:       + schema = (known after apply)
2025-05-14 12:54:09.751495 | orchestrator | 12:54:09.751 STDOUT terraform:       + size_bytes = (known after apply)
2025-05-14 12:54:09.751541 | orchestrator | 12:54:09.751 STDOUT terraform:       + tags = (known after apply)
2025-05-14 12:54:09.751601 | orchestrator | 12:54:09.751 STDOUT terraform:       + updated_at = (known after apply)
2025-05-14 12:54:09.751611 | orchestrator | 12:54:09.751 STDOUT terraform:     }
2025-05-14 12:54:09.751818 | orchestrator | 12:54:09.751 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-14 12:54:09.751919 | orchestrator | 12:54:09.751 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-14 12:54:09.751947 | orchestrator | 12:54:09.751 STDOUT terraform:       + content = (known after apply)
2025-05-14 12:54:09.751960 | orchestrator | 12:54:09.751 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-14 12:54:09.751971 | orchestrator | 12:54:09.751 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-14 12:54:09.751984 | orchestrator | 12:54:09.751 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-14 12:54:09.751995 | orchestrator | 12:54:09.751 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-14 12:54:09.752010 | orchestrator | 12:54:09.751 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-14 12:54:09.752083 | orchestrator | 12:54:09.751 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-14 12:54:09.752096 | orchestrator | 12:54:09.752 STDOUT terraform:       + directory_permission = "0777"
2025-05-14 12:54:09.752112 | orchestrator | 12:54:09.752 STDOUT terraform:       + file_permission = "0644"
2025-05-14 12:54:09.752173 | orchestrator | 12:54:09.752 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-05-14 12:54:09.752225 | orchestrator | 12:54:09.752 STDOUT terraform:       + id = (known after apply)
2025-05-14 12:54:09.752237 | orchestrator | 12:54:09.752 STDOUT terraform:     }
2025-05-14 12:54:09.752297 | orchestrator | 12:54:09.752 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-14 12:54:09.752313 | orchestrator | 12:54:09.752 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-14 12:54:09.752363 | orchestrator | 12:54:09.752 STDOUT terraform:       + content = (sensitive value)
2025-05-14 12:54:09.752436 | orchestrator | 12:54:09.752 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-05-14 12:54:09.752453 | orchestrator | 12:54:09.752 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-05-14 12:54:09.752535 | orchestrator | 12:54:09.752 STDOUT terraform:       + content_md5 = (known after apply)
2025-05-14 12:54:09.752645 | orchestrator | 12:54:09.752 STDOUT terraform:       + content_sha1 = (known after apply)
2025-05-14 12:54:09.752659 | orchestrator | 12:54:09.752 STDOUT terraform:       + content_sha256 = (known after apply)
2025-05-14 12:54:09.752674 | orchestrator | 12:54:09.752 STDOUT terraform:       + content_sha512 = (known after apply)
2025-05-14 12:54:09.752719 | orchestrator | 12:54:09.752 STDOUT terraform:       + directory_permission = "0700"
2025-05-14 12:54:09.752757 | orchestrator | 12:54:09.752 STDOUT terraform:       + file_permission = "0600"
2025-05-14 12:54:09.752795 | orchestrator | 12:54:09.752 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-05-14 12:54:09.752848 | orchestrator | 12:54:09.752 STDOUT terraform:       + id = (known after apply)
2025-05-14 12:54:09.752865 | orchestrator | 12:54:09.752 STDOUT terraform:     }
2025-05-14 12:54:09.752942 | orchestrator | 12:54:09.752 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-14 12:54:09.753018 | orchestrator | 12:54:09.752 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-14 12:54:09.753036 | orchestrator | 12:54:09.752 STDOUT terraform:       + attachment = (known after apply)
2025-05-14 12:54:09.753085 | orchestrator | 12:54:09.753 STDOUT terraform:       + availability_zone = "nova"
2025-05-14 12:54:09.753102 | orchestrator | 12:54:09.753 STDOUT terraform:       + id = (known after apply)
2025-05-14 12:54:09.753184 | orchestrator | 12:54:09.752 STDERR terraform: Error: Expected HTTP response code [200 204 300] when accessing [GET https://neutron.services.a.regiocloud.tech/v2.0/networks?name=public], but got 504 instead:

504 Gateway Time-out
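[Editor's note] Both control-plane endpoints answered with transient gateway errors during this run (503 from Nova earlier, 504 from Neutron here), so the plan failed before any resource was created. One mitigation would be a pre-flight poll that waits until the API stops returning 5xx before invoking terragrunt. The sketch below is illustrative only: the helper name and status set are assumptions, and `check` stands in for whatever HTTP probe (e.g. a `GET` against the Neutron endpoint) is actually used.

```python
import time

# Transient statuses observed in this job: 503 from Nova, 504 from Neutron.
TRANSIENT_STATUSES = {500, 502, 503, 504}

def wait_until_available(check, attempts=5, delay=2.0):
    """Call check() until it returns an HTTP status outside the transient
    set, or attempts are exhausted; returns the last observed status."""
    for i in range(attempts):
        status = check()
        if status not in TRANSIENT_STATUSES:
            return status
        if i < attempts - 1:
            time.sleep(delay)
    return status
```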

2025-05-14 12:54:09.753 | orchestrator | STDERR terraform:
    The server didn't respond in time.

    with data.openstack_networking_network_v2.public,
    on data.tf line 1, in data "openstack_networking_network_v2" "public":
     1: data "openstack_networking_network_v2" "public" {

2025-05-14 12:54:09.753 | orchestrator | STDOUT terraform:
        + image_id          = (known after apply)
        + metadata          = (known after apply)
        + name              = "testbed-volume-manager-base"
        + region            = (known after apply)
        + size              = 80
        + volume_type       = "ssd"
      }

      # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
      + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + image_id          = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-0-node-base"
          + region            = (known after apply)
          + size              = 80
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
      + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + image_id          = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-1-node-base"
          + region            = (known after apply)
          + size              = 80
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
      + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + image_id          = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-2-node-base"
          + region            = (known after apply)
          + size              = 80
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
      + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + image_id          = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-3-node-base"
          + region            = (known after apply)
          + size              = 80
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
      + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + image_id          = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-4-node-base"
          + region            = (known after apply)
          + size              = 80
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
      + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + image_id          = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-5-node-base"
          + region            = (known after apply)
          + size              = 80
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_volume[0] will be created
      + resource "openstack_blockstorage_volume_v3" "node_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-0-node-3"
          + region            = (known after apply)
          + size              = 20
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_volume[1] will be created
      + resource "openstack_blockstorage_volume_v3" "node_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-1-node-4"
          + region            = (known after apply)
          + size              = 20
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_volume[2] will be created
      + resource "openstack_blockstorage_volume_v3" "node_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-2-node-5"
          + region            = (known after apply)
          + size              = 20
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_volume[3] will be created
      + resource "openstack_blockstorage_volume_v3" "node_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-3-node-3"
          + region            = (known after apply)
          + size              = 20
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_volume[4] will be created
      + resource "openstack_blockstorage_volume_v3" "node_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-4-node-4"
          + region            = (known after apply)
          + size              = 20
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_volume[5] will be created
      + resource "openstack_blockstorage_volume_v3" "node_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-5-node-5"
          + region            = (known after apply)
          + size              = 20
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_volume[6] will be created
      + resource "openstack_blockstorage_volume_v3" "node_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-6-node-3"
          + region            = (known after apply)
          + size              = 20
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_volume[7] will be created
      + resource "openstack_blockstorage_volume_v3" "node_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-7-node-4"
          + region            = (known after apply)
          + size              = 20
          + volume_type       = "ssd"
        }

      # openstack_blockstorage_volume_v3.node_volume[8] will be created
      + resource "openstack_blockstorage_volume_v3" "node_volume" {
          + attachment        = (known after apply)
          + availability_zone = "nova"
          + id                = (known after apply)
          + metadata          = (known after apply)
          + name              = "testbed-volume-8-node-5"
          + region            = (known after apply)
          + size              = 20
          + volume_type       = "ssd"
        }

      # openstack_compute_keypair_v2.key will be created
      + resource "openstack_compute_keypair_v2" "key" {
          + fingerprint = (known after apply)
          + id          = (known after apply)
          + name        = "testbed"
          + private_key = (sensitive value)
          + public_key  = (known after apply)
          + region      = (known after apply)
          + user_id     = (known after apply)
        }

      # openstack_networking_network_v2.net_management will be created
      + resource "openstack_networking_network_v2" "net_management" {
          + admin_state_up          = (known after apply)
          + all_tags                = (known after apply)
          + availability_zone_hints = [
              + "nova",
            ]
          + dns_domain              = (known after apply)
          + external                = (known after apply)
          + id                      = (known after apply)
          + mtu                     = (known after apply)
          + name                    = "net-testbed-management"
          + port_security_enabled   = (known after apply)
          + qos_policy_id           = (known after apply)
          + region                  = (known after apply)
          + shared                  = (known after apply)
          + tenant_id               = (known after apply)
          + transparent_vlan        = (known after apply)

          + segments (known after apply)
        }

      # openstack_networking_subnet_v2.subnet_management will be created
      + resource "openstack_networking_subnet_v2" "subnet_management" {
          + all_tags          = (known after apply)
          + cidr              = "192.168.16.0/20"
          + dns_nameservers   = [
              + "8.8.8.8",
              + "9.9.9.9",
            ]
          + enable_dhcp       = true
          + gateway_ip        = (known after apply)
          + id                = (known after apply)
          + ip_version        = 4
          + ipv6_address_mode = (known after apply)
          + ipv6_ra_mode      = (known after apply)
          + name              = "subnet-testbed-management"
          + network_id        = (known after apply)
          + no_gateway        = false
          + region            = (known after apply)
          + service_types     = (known after apply)
          + tenant_id         = (known after apply)

          + allocation_pool {
              + end   = "192.168.31.250"
              + start = "192.168.31.200"
            }
        }

      # terraform_data.image will be created
      + resource "terraform_data" "image" {
          + id     = (known after apply)
          + input  = "Ubuntu 24.04"
          + output = (known after apply)
        }

      # terraform_data.image_node will be created
      + resource "terraform_data" "image_node" {
          + id     = (known after apply)
          + input  = "Ubuntu 24.04"
          + output = (known after apply)
        }

    Plan: 23 to add, 0 to change, 0 to destroy.

    Changes to Outputs:
      + private_key = (sensitive value)

2025-05-14 12:54:09.768 | orchestrator | ERROR  tofu invocation failed in .
2025-05-14 12:54:09.768 | orchestrator | ERROR  error occurred:
2025-05-14 12:54:09.769 | orchestrator |
2025-05-14 12:54:09.769 | orchestrator | * Failed to execute "../../../../../terraform apply -auto-approve" in .
2025-05-14 12:54:09.769 | orchestrator |
2025-05-14 12:54:09.769 | orchestrator | Error: Expected HTTP response code [200 204 300] when accessing [GET https://neutron.services.a.regiocloud.tech/v2.0/networks?name=public], but got 504 instead:

504 Gateway Time-out

2025-05-14 12:54:09.769 | orchestrator | The server didn't respond in time.
2025-05-14 12:54:09.769 | orchestrator |
2025-05-14 12:54:09.769 | orchestrator | with data.openstack_networking_network_v2.public,
2025-05-14 12:54:09.769 | orchestrator | on data.tf line 1, in data "openstack_networking_network_v2" "public":
2025-05-14 12:54:09.769 | orchestrator |  1: data "openstack_networking_network_v2" "public" {
2025-05-14 12:54:09.769 | orchestrator |
2025-05-14 12:54:09.769 | orchestrator | exit status 1
2025-05-14 12:54:09.795 | orchestrator | make: *** [Makefile:111: create] Error 1
2025-05-14 12:54:10.190 | orchestrator | ERROR
2025-05-14 12:54:10.190 | orchestrator | {
2025-05-14 12:54:10.190 | orchestrator |   "delta": "0:02:11.295664",
2025-05-14 12:54:10.190 | orchestrator |   "end": "2025-05-14 12:54:09.795806",
2025-05-14 12:54:10.190 | orchestrator |   "msg": "non-zero return code",
2025-05-14 12:54:10.191 | orchestrator |   "rc": 2,
2025-05-14 12:54:10.191 | orchestrator |   "start": "2025-05-14 12:51:58.500142"
2025-05-14 12:54:10.191 | orchestrator | } failure
2025-05-14 12:54:10.212 |
2025-05-14 12:54:10.212 | PLAY RECAP
2025-05-14 12:54:10.213 | orchestrator | ok: 4 changed: 0 unreachable: 0 failed: 1 skipped: 1 rescued: 0 ignored: 0
2025-05-14 12:54:10.391 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-14 12:54:10.392 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-14 12:54:11.197 | PLAY [Post output play]
2025-05-14 12:54:11.214 | LOOP [stage-output : Register sources]
2025-05-14 12:54:11.295 | TASK [stage-output : Check sudo]
2025-05-14 12:54:11.763 | orchestrator | sudo: a password is required
2025-05-14 12:54:11.834 | orchestrator | ok: Runtime: 0:00:00.014644
2025-05-14 12:54:11.846 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-14 12:54:11.883 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-14 12:54:11.962 | orchestrator | ok
2025-05-14 12:54:11.971 | LOOP [stage-output : Ensure target folders exist]
2025-05-14 12:54:12.457 | orchestrator | ok: "docs"
2025-05-14 12:54:12.702 | orchestrator | ok: "artifacts"
2025-05-14 12:54:12.954 | orchestrator | ok: "logs"
2025-05-14 12:54:12.976 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-14 12:54:13.014 | TASK [stage-output : Make all log files readable]
2025-05-14 12:54:13.328 | orchestrator | ok
2025-05-14 12:54:13.338 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-14 12:54:13.373 | orchestrator | skipping: Conditional result was False
2025-05-14 12:54:13.392 | TASK [stage-output : Discover log files for compression]
2025-05-14 12:54:13.418 | orchestrator | skipping: Conditional result was False
2025-05-14 12:54:13.431 | LOOP [stage-output : Archive everything from logs]
2025-05-14 12:54:13.483 | PLAY [Post cleanup play]
2025-05-14 12:54:13.494 | TASK [Set cloud fact (Zuul deployment)]
2025-05-14 12:54:13.566 | orchestrator | ok
2025-05-14 12:54:13.577 | TASK [Set cloud fact (local deployment)]
2025-05-14 12:54:13.612 | orchestrator | skipping: Conditional result was False
2025-05-14 12:54:13.628 | TASK [Clean the cloud environment]
2025-05-14 12:54:14.232 | orchestrator | 2025-05-14 12:54:14 - clean up servers
2025-05-14 12:56:29.681 | orchestrator | Traceback (most recent call last):
2025-05-14 12:56:29.681 | orchestrator |   File "/home/zuul-testbed04/src/github.com/osism/testbed/terraform/scripts/cleanup.py", line 215, in <module>
2025-05-14 12:56:29.681 | orchestrator |     main()
2025-05-14 12:56:29.681 | orchestrator |   File "/home/zuul-testbed04/src/github.com/osism/testbed/terraform/scripts/cleanup.py", line 201, in main
2025-05-14 12:56:29.681 | orchestrator |     cleanup_servers(conn, PREFIX)
2025-05-14 12:56:29.681 | orchestrator |   File "/home/zuul-testbed04/src/github.com/osism/testbed/terraform/scripts/cleanup.py", line 120, in cleanup_servers
2025-05-14 12:56:29.681 | orchestrator |     servers = list(conn.compute.servers(name=f"^{prefix}"))
2025-05-14 12:56:29.681 | orchestrator |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:56:29.681 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/openstack/resource.py", line 1775, in list
2025-05-14 12:56:29.682 | orchestrator |     exceptions.raise_from_response(response)
2025-05-14 12:56:29.682 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/openstack/exceptions.py", line 236, in raise_from_response
2025-05-14 12:56:29.683 | orchestrator |     raise cls(
2025-05-14 12:56:29.683 | orchestrator | openstack.exceptions.HttpException: HttpException: 503: Server Error for url: https://nova.services.a.regiocloud.tech/v2.1/servers/detail?name=%5Etestbed, 503 Service Unavailable: The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.: Apache/2.4.52 (Ubuntu) Server at nova.services.a.regiocloud.tech Port 8774: Service Unavailable
2025-05-14 12:56:29.797 | orchestrator | ok: Runtime: 0:02:15.819213
2025-05-14 12:56:29.799 | PLAY RECAP
2025-05-14 12:56:29.799 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-14 12:56:29.949 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-14 12:56:29.950 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-14 12:56:30.731 | PLAY [Cleanup play]
2025-05-14 12:56:30.748 | TASK [Set cloud fact (Zuul deployment)]
2025-05-14 12:56:30.808 | orchestrator | ok
2025-05-14 12:56:30.818 | TASK [Set cloud fact (local deployment)]
2025-05-14 12:56:30.864 | orchestrator | skipping: Conditional result was False
2025-05-14 12:56:30.880 | TASK [Clean the cloud environment]
2025-05-14 12:56:31.691 | orchestrator | 2025-05-14 12:56:31 - clean up servers
2025-05-14 12:58:31.753 | orchestrator | 2025-05-14 12:58:31 - Failed to discover available identity versions when contacting https://keystone.services.a.regiocloud.tech. Attempting to parse version from URL.
2025-05-14 12:58:31.753431 | orchestrator | Traceback (most recent call last):
2025-05-14 12:58:31.753462 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/identity/generic/base.py", line 133, in _do_create_plugin
2025-05-14 12:58:31.755355 | orchestrator |     disc = self.get_discovery(session,
2025-05-14 12:58:31.755406 | orchestrator |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.755425 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/identity/base.py", line 605, in get_discovery
2025-05-14 12:58:31.757097 | orchestrator |     return discover.get_discovery(session=session, url=url,
2025-05-14 12:58:31.757139 | orchestrator |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.757163 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/discover.py", line 1459, in get_discovery
2025-05-14 12:58:31.758777 | orchestrator |     disc = Discover(session, url, authenticated=authenticated)
2025-05-14 12:58:31.758892 | orchestrator |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.758925 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/discover.py", line 539, in __init__
2025-05-14 12:58:31.759108 | orchestrator |     self._data = get_version_data(session, url,
2025-05-14 12:58:31.759129 | orchestrator |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.759141 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/discover.py", line 106, in get_version_data
2025-05-14 12:58:31.759207 | orchestrator |     resp = session.get(url, headers=headers, authenticated=authenticated)
2025-05-14 12:58:31.759278 | orchestrator |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.759293 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/session.py", line 1133, in get
2025-05-14 12:58:31.760973 | orchestrator |     return self.request(url, 'GET', **kwargs)
2025-05-14 12:58:31.761009 | orchestrator |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.761022 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/session.py", line 978, in request
2025-05-14 12:58:31.761367 | orchestrator |     raise exceptions.from_response(resp, method, url)
2025-05-14 12:58:31.761386 | orchestrator | keystoneauth1.exceptions.http.GatewayTimeout: Gateway Timeout (HTTP 504)
2025-05-14 12:58:31.761397 | orchestrator |
2025-05-14 12:58:31.761409 | orchestrator | During handling of the above exception, another exception occurred:
2025-05-14 12:58:31.761420 | orchestrator |
2025-05-14 12:58:31.761436 | orchestrator | Traceback (most recent call last):
2025-05-14 12:58:31.761448 | orchestrator |   File "/home/zuul-testbed04/src/github.com/osism/testbed/terraform/scripts/cleanup.py", line 215, in <module>
2025-05-14 12:58:31.761499 | orchestrator |     main()
2025-05-14 12:58:31.761515 | orchestrator |   File "/home/zuul-testbed04/src/github.com/osism/testbed/terraform/scripts/cleanup.py", line 201, in main
2025-05-14 12:58:31.761614 | orchestrator |     cleanup_servers(conn, PREFIX)
2025-05-14 12:58:31.761633 | orchestrator |   File "/home/zuul-testbed04/src/github.com/osism/testbed/terraform/scripts/cleanup.py", line 120, in cleanup_servers
2025-05-14 12:58:31.761716 | orchestrator |     servers = list(conn.compute.servers(name=f"^{prefix}"))
2025-05-14 12:58:31.761731 | orchestrator |                    ^^^^^^^^^^^^
2025-05-14 12:58:31.761746 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/openstack/service_description.py", line 87, in __get__
2025-05-14 12:58:31.762773 | orchestrator |     proxy = self._make_proxy(instance)
2025-05-14 12:58:31.762797 | orchestrator |             ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.762813 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/openstack/service_description.py", line 262, in _make_proxy
2025-05-14 12:58:31.762939 | orchestrator |     found_version = temp_adapter.get_api_major_version()
2025-05-14 12:58:31.762958 | orchestrator |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.762970 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/adapter.py", line 354, in get_api_major_version
2025-05-14 12:58:31.763973 | orchestrator |     return self.session.get_api_major_version(auth or self.auth, **kwargs)
2025-05-14 12:58:31.764037 | orchestrator |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.764050 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/session.py", line 1268, in get_api_major_version
2025-05-14 12:58:31.764424 | orchestrator |     return auth.get_api_major_version(self, **kwargs)
2025-05-14 12:58:31.764447 | orchestrator |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.764459 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/identity/base.py", line 497, in get_api_major_version
2025-05-14 12:58:31.764649 | orchestrator |     data = get_endpoint_data(discover_versions=discover_versions)
2025-05-14 12:58:31.764667 | orchestrator |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.764681 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/identity/base.py", line 268, in get_endpoint_data
2025-05-14 12:58:31.764963 | orchestrator |     service_catalog = self.get_access(session).service_catalog
2025-05-14 12:58:31.765049 | orchestrator |                       ^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.765066 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/identity/base.py", line 131, in get_access
2025-05-14 12:58:31.765090 | orchestrator |     self.auth_ref = self.get_auth_ref(session)
2025-05-14 12:58:31.765102 | orchestrator |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.765114 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/identity/generic/base.py", line 203, in get_auth_ref
2025-05-14 12:58:31.765126 | orchestrator |     self._plugin = self._do_create_plugin(session)
2025-05-14 12:58:31.765141 | orchestrator |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-05-14 12:58:31.765153 | orchestrator |   File "/home/zuul-testbed04/venv/lib/python3.11/site-packages/keystoneauth1/identity/generic/base.py", line 155, in _do_create_plugin
2025-05-14 12:58:31.765236 | orchestrator |     raise exceptions.DiscoveryFailure(
2025-05-14 12:58:31.765255 | orchestrator | keystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Gateway Timeout (HTTP 504)
2025-05-14 12:58:31.999872 | orchestrator | ok: Runtime: 0:02:00.455544
2025-05-14 12:58:32.006981 |
2025-05-14 12:58:32.007158 | PLAY RECAP
2025-05-14 12:58:32.007281 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-14 12:58:32.007357 |
2025-05-14 12:58:32.149350 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-14 12:58:32.150502 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-14 12:58:32.950318 |
2025-05-14 12:58:32.950527 | PLAY [Base post-fetch]
2025-05-14 12:58:32.967148 |
2025-05-14 12:58:32.967304 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-14 12:58:33.024845 | orchestrator | skipping: Conditional result was False
2025-05-14 12:58:33.038581 |
2025-05-14 12:58:33.038804 | TASK [fetch-output : Set log path for single node]
2025-05-14 12:58:33.099406 | orchestrator | ok
2025-05-14 12:58:33.111066 |
2025-05-14 12:58:33.111254 | LOOP [fetch-output : Ensure local output dirs]
2025-05-14 12:58:33.613302 | orchestrator -> localhost | ok:
"/var/lib/zuul/builds/d1b692c6f21c4dabb19ab37a03516b02/work/logs" 2025-05-14 12:58:33.896958 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d1b692c6f21c4dabb19ab37a03516b02/work/artifacts" 2025-05-14 12:58:34.183362 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/d1b692c6f21c4dabb19ab37a03516b02/work/docs" 2025-05-14 12:58:34.203685 | 2025-05-14 12:58:34.203948 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-05-14 12:58:35.133357 | orchestrator | changed: .d..t...... ./ 2025-05-14 12:58:35.133720 | orchestrator | changed: All items complete 2025-05-14 12:58:35.133777 | 2025-05-14 12:58:35.883426 | orchestrator | changed: .d..t...... ./ 2025-05-14 12:58:36.631844 | orchestrator | changed: .d..t...... ./ 2025-05-14 12:58:36.660387 | 2025-05-14 12:58:36.660525 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-05-14 12:58:36.704126 | orchestrator | skipping: Conditional result was False 2025-05-14 12:58:36.706745 | orchestrator | skipping: Conditional result was False 2025-05-14 12:58:36.723417 | 2025-05-14 12:58:36.723579 | PLAY RECAP 2025-05-14 12:58:36.723664 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-05-14 12:58:36.723725 | 2025-05-14 12:58:36.867805 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-14 12:58:36.868924 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-14 12:58:37.621718 | 2025-05-14 12:58:37.621935 | PLAY [Base post] 2025-05-14 12:58:37.637353 | 2025-05-14 12:58:37.637508 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-05-14 12:58:38.259634 | orchestrator | changed 2025-05-14 12:58:38.273306 | 2025-05-14 12:58:38.273453 | PLAY RECAP 2025-05-14 12:58:38.273537 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-05-14 12:58:38.273620 | 2025-05-14 
12:58:38.409723 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-14 12:58:38.410821 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-05-14 12:58:39.237833 | 2025-05-14 12:58:39.238416 | PLAY [Base post-logs] 2025-05-14 12:58:39.249703 | 2025-05-14 12:58:39.249851 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-05-14 12:58:39.725926 | localhost | changed 2025-05-14 12:58:39.745162 | 2025-05-14 12:58:39.745350 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-05-14 12:58:39.785013 | localhost | ok 2025-05-14 12:58:39.793620 | 2025-05-14 12:58:39.793904 | TASK [Set zuul-log-path fact] 2025-05-14 12:58:39.817634 | localhost | ok 2025-05-14 12:58:39.831904 | 2025-05-14 12:58:39.832048 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-14 12:58:39.871158 | localhost | ok 2025-05-14 12:58:39.878633 | 2025-05-14 12:58:39.878955 | TASK [upload-logs : Create log directories] 2025-05-14 12:58:40.406336 | localhost | changed 2025-05-14 12:58:40.409632 | 2025-05-14 12:58:40.409769 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-05-14 12:58:40.932286 | localhost -> localhost | ok: Runtime: 0:00:00.007930 2025-05-14 12:58:40.939657 | 2025-05-14 12:58:40.939900 | TASK [upload-logs : Upload logs to log server] 2025-05-14 12:58:41.518568 | localhost | Output suppressed because no_log was given 2025-05-14 12:58:41.522053 | 2025-05-14 12:58:41.522222 | LOOP [upload-logs : Compress console log and json output] 2025-05-14 12:58:41.580313 | localhost | skipping: Conditional result was False 2025-05-14 12:58:41.594326 | localhost | skipping: Conditional result was False 2025-05-14 12:58:41.605713 | 2025-05-14 12:58:41.605884 | LOOP [upload-logs : Upload compressed console log and json output] 2025-05-14 12:58:41.660799 | localhost | skipping: Conditional result was False 2025-05-14 12:58:41.661229 | 
2025-05-14 12:58:41.665593 | localhost | skipping: Conditional result was False 2025-05-14 12:58:41.678565 | 2025-05-14 12:58:41.678994 | LOOP [upload-logs : Upload console log and json output]
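Both cleanup attempts in this log burned about two minutes each before surfacing a gateway error (503 from Nova, then a 504 during Keystone version discovery). A hedged, stdlib-only sketch of a pre-flight reachability check on the identity endpoint that a cleanup wrapper could use to fail fast; the function name, port, and thresholds are illustrative assumptions, not part of cleanup.py or the testbed playbooks:

```python
import urllib.error
import urllib.request


def identity_endpoint_reachable(auth_url, timeout=10):
    """Fetch the identity endpoint's root/version document, the same URL
    keystoneauth1 discovery would hit. Return False on 5xx gateway errors
    (like the 504 above) or connection failures, True otherwise."""
    try:
        with urllib.request.urlopen(auth_url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        # Keystone may legitimately answer 300 Multiple Choices at the
        # root URL; only 5xx responses count as unreachable here.
        return exc.code < 500
    except (urllib.error.URLError, OSError):
        return False


# A connection-refused endpoint counts as unreachable (port 9 is assumed
# to have no listener on this host).
ok = identity_endpoint_reachable("http://127.0.0.1:9/", timeout=2)
```

Gating `cleanup_servers` on such a check would trade the two-minute discovery timeout for a quick skip-and-retry-later; whether that policy fits a periodic cleanup job is a judgment call, not something this log establishes.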