On some systems a login shell wipes important parts of the environment, such as PATH. This causes the build to fail, since it can no longer find `spack`. For better robustness, don't use a login shell.
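As a rough illustration (exact behavior depends on the image; Leap 15 is one example where this bites), compare what PATH looks like in a non-login versus a login shell:

```bash
# Illustrative only: a non-login shell keeps the PATH it inherited from the caller.
bash -c 'echo "non-login: $PATH"'
# A login shell sources /etc/profile first, which on some images resets PATH and
# drops anything the caller had added to it (such as Spack's bin directory).
bash -l -c 'echo "login: $PATH"'
```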
In a full CI job, the final `spack install` runs in an environment built up by scripts executing in this order:
```bash
export AWS_SECRET=...                   # 1. Load environment from GitLab project variables
source spack/share/spack/setup-env.sh   # 2. Load Spack into the environment (PATH)
spack env activate -V concrete_env      # 3. Activate the concrete environment
source /etc/profile                     # 4. Bash login shell (from -l)
spack install ...
```
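The ordering above is what makes (4) harmful: /etc/profile runs after setup-env.sh has already put Spack on PATH, so on images where it rebuilds PATH from scratch the earlier steps are undone. A minimal sketch of the failure mode (paths and behavior are illustrative, not taken from the actual job script):

```bash
source spack/share/spack/setup-env.sh   # (2) prepends <spack prefix>/bin to PATH
source /etc/profile                     # (4) may reset PATH entirely, e.g. on Leap 15
spack install ...                       # -> bash: spack: command not found
```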
By contrast, when a user launches their own container with `docker run -it` or `podman run -it`, they end up running `spack install` in an environment built up in this order:
```bash
source /etc/bash.bashrc                 # (not 4). Bash interactive shell (default with TTY)
export AWS_SECRET=...                   # ~1. Manually load environment from GitLab project variables
source spack/share/spack/setup-env.sh   # 2. Load Spack into the environment (PATH)
spack env activate -V concrete_env      # 3. Activate the concrete environment
spack install ...
```
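If you want to confirm which mode you actually land in, bash can tell you directly; a quick illustrative check (the image name is a placeholder):

```bash
docker run -it <image> bash
# Inside the container: an interactive shell started this way is not a login shell,
# so it read /etc/bash.bashrc rather than /etc/profile.
shopt -q login_shell && echo "login shell" || echo "interactive, non-login shell"
```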
The big problem is that (4) has a completely different position and content (on Leap 15, and possibly in other containers).
So in context, this PR removes (4) from the CI job case, leaving us with the simpler:
```bash
export AWS_SECRET=...                   # 1. Load environment from GitLab project variables
source spack/share/spack/setup-env.sh   # 2. Load Spack into the environment (PATH)
spack env activate -V concrete_env      # 3. Activate the concrete environment
spack install ...
```
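In practice this presumably amounts to dropping the `-l` wherever the job wraps the install command in a shell; a hypothetical before/after, not the literal diff:

```bash
# Before: the install step ran through a login shell, pulling in /etc/profile (step 4)
bash -l -c "spack install ..."
# After: a plain, non-login shell, so only steps 1-3 shape the environment
bash -c "spack install ..."
```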