Compare commits
1 commit
bugfix/env ... test/envir
Commit 79208db191
.github/ISSUE_TEMPLATE/feature_request.yml (vendored) — 6 lines changed

@@ -1,4 +1,4 @@
-name: "\U0001F38A Feature request"
+name: "\U0001F38A Feature request"
 description: Suggest adding a feature that is not yet in Spack
 labels: [feature]
 body:
@@ -29,11 +29,13 @@ body:
     attributes:
       label: General information
       options:
         - label: I have run `spack --version` and reported the version of Spack
           required: true
         - label: I have searched the issues of this repo and believe this is not a duplicate
           required: true
   - type: markdown
     attributes:
       value: |
         If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.

         Other than that, thanks for taking the time to contribute to Spack!
.github/workflows/audit.yaml (vendored) — 2 lines changed

@@ -19,7 +19,7 @@ jobs:
   package-audits:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
         with:
           python-version: ${{inputs.python_version}}
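Every workflow touched in this compare makes the same kind of change: the ``actions/checkout`` step is re-pinned from one full commit SHA to another, with a trailing comment recording the tag the SHA corresponds to. A minimal sketch of that pinning pattern, assuming a hypothetical job name (the SHA shown is the one appearing in this diff):

    jobs:
      example:                 # hypothetical job name
        runs-on: ubuntu-latest
        steps:
          # Pinning to a full commit SHA protects against a tag being moved;
          # the comment documents which release the SHA was taken from.
          - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2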
.github/workflows/bootstrap.yml (vendored) — 22 lines changed

@@ -24,7 +24,7 @@ jobs:
           make patch unzip which xz python3 python3-devel tree \
           cmake bison bison-devel libstdc++-static
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
         with:
           fetch-depth: 0
       - name: Setup non-root user
@@ -62,7 +62,7 @@ jobs:
           make patch unzip xz-utils python3 python3-dev tree \
           cmake bison
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
         with:
           fetch-depth: 0
       - name: Setup non-root user
@@ -99,7 +99,7 @@ jobs:
           bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
           make patch unzip xz-utils python3 python3-dev tree
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
         with:
           fetch-depth: 0
       - name: Setup non-root user
@@ -133,7 +133,7 @@ jobs:
           make patch unzip which xz python3 python3-devel tree \
           cmake bison
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
         with:
           fetch-depth: 0
       - name: Setup repo
@@ -158,7 +158,7 @@ jobs:
         run: |
           brew install cmake bison@2.7 tree
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
       - name: Bootstrap clingo
         run: |
           source share/spack/setup-env.sh
@@ -179,7 +179,7 @@ jobs:
         run: |
           brew install tree
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
       - name: Bootstrap clingo
         run: |
           set -ex
@@ -204,7 +204,7 @@ jobs:
     runs-on: ubuntu-20.04
     steps:
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
         with:
           fetch-depth: 0
       - name: Setup repo
@@ -247,7 +247,7 @@ jobs:
           bzip2 curl file g++ gcc patchelf gfortran git gzip \
           make patch unzip xz-utils python3 python3-dev tree
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
         with:
           fetch-depth: 0
       - name: Setup non-root user
@@ -283,7 +283,7 @@ jobs:
           make patch unzip xz-utils python3 python3-dev tree \
           gawk
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
         with:
           fetch-depth: 0
       - name: Setup non-root user
@@ -316,7 +316,7 @@ jobs:
           # Remove GnuPG since we want to bootstrap it
           sudo rm -rf /usr/local/bin/gpg
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
       - name: Bootstrap GnuPG
         run: |
           source share/spack/setup-env.sh
@@ -333,7 +333,7 @@ jobs:
           # Remove GnuPG since we want to bootstrap it
           sudo rm -rf /usr/local/bin/gpg
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
       - name: Bootstrap GnuPG
         run: |
           source share/spack/setup-env.sh
.github/workflows/build-containers.yml (vendored) — 4 lines changed

@@ -50,7 +50,7 @@ jobs:
     if: github.repository == 'spack/spack'
     steps:
       - name: Checkout
-        uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+        uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2

       - name: Set Container Tag Normal (Nightly)
         run: |
@@ -89,7 +89,7 @@ jobs:
         uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # @v1

       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@4b4e9c3e2d4531116a6f8ba8e71fc6e2cb6e6c8c # @v1
+        uses: docker/setup-buildx-action@f03ac48505955848960e80bbb68046aa35c7b9e7 # @v1

       - name: Log in to GitHub Container Registry
         uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # @v1
.github/workflows/ci.yaml (vendored) — 2 lines changed

@@ -35,7 +35,7 @@ jobs:
       core: ${{ steps.filter.outputs.core }}
       packages: ${{ steps.filter.outputs.packages }}
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
        if: ${{ github.event_name == 'push' }}
        with:
          fetch-depth: 0
.github/workflows/unit_tests.yaml (vendored) — 10 lines changed

@@ -47,7 +47,7 @@ jobs:
         on_develop: false

     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
        with:
          fetch-depth: 0
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
@@ -94,7 +94,7 @@ jobs:
   shell:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
        with:
          fetch-depth: 0
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
@@ -133,7 +133,7 @@ jobs:
         dnf install -y \
             bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
             make patch tcl unzip which xz
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
       - name: Setup repo and non-root user
         run: |
           git --version
@@ -151,7 +151,7 @@ jobs:
   clingo-cffi:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
        with:
          fetch-depth: 0
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
@@ -185,7 +185,7 @@ jobs:
     matrix:
       python-version: ["3.10"]
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
        with:
          fetch-depth: 0
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
.github/workflows/valid-style.yml (vendored) — 29 lines changed

@@ -18,7 +18,7 @@ jobs:
   validate:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
         with:
           python-version: '3.11'
@@ -35,7 +35,7 @@ jobs:
   style:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # @v2
        with:
          fetch-depth: 0
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
@@ -58,28 +58,3 @@ jobs:
     with:
       with_coverage: ${{ inputs.with_coverage }}
       python_version: '3.11'
-  # Check that spack can bootstrap the development environment on Python 3.6 - RHEL8
-  bootstrap-dev-rhel8:
-    runs-on: ubuntu-latest
-    container: registry.access.redhat.com/ubi8/ubi
-    steps:
-      - name: Install dependencies
-        run: |
-          dnf install -y \
-              bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
-              make patch tcl unzip which xz
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # @v2
-      - name: Setup repo and non-root user
-        run: |
-          git --version
-          git fetch --unshallow
-          . .github/workflows/setup_git.sh
-          useradd spack-test
-          chown -R spack-test .
-      - name: Bootstrap Spack development environment
-        shell: runuser -u spack-test -- bash {0}
-        run: |
-          source share/spack/setup-env.sh
-          spack -d bootstrap now --dev
-          spack style -t black
-          spack unit-test -V
.github/workflows/windows_python.yml (vendored) — 8 lines changed

@@ -15,7 +15,7 @@ jobs:
   unit-tests:
     runs-on: windows-latest
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
        with:
          fetch-depth: 0
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
@@ -39,7 +39,7 @@ jobs:
   unit-tests-cmd:
     runs-on: windows-latest
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
        with:
          fetch-depth: 0
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
@@ -63,7 +63,7 @@ jobs:
   build-abseil:
     runs-on: windows-latest
     steps:
-      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+      - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
        with:
          fetch-depth: 0
       - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
@@ -87,7 +87,7 @@ jobs:
   #     git config --global core.symlinks false
   #   shell:
   #     powershell
-  #   - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3
+  #   - uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c
   #     with:
   #       fetch-depth: 0
   #   - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
bin/spack.bat — 109 lines changed

@@ -50,69 +50,24 @@ setlocal enabledelayedexpansion
 :: flags will always start with '-', e.g. --help or -V
 :: subcommands will never start with '-'
 :: everything after the subcommand is an arg

-:: we cannot allow batch "for" loop to directly process CL args
-:: a number of batch reserved characters are commonly passed to
-:: spack and allowing batch's "for" method to process the raw inputs
-:: results in a large number of formatting issues
-:: instead, treat the entire CLI as one string
-:: and split by space manually
-:: capture cl args in variable named cl_args
-set cl_args=%*
-:process_cl_args
-rem tokens=1* returns the first processed token produced
-rem by tokenizing the input string cl_args on spaces into
-rem the named variable %%g
-rem While this make look like a for loop, it only
-rem executes a single time for each of the cl args
-rem the actual iterative loop is performed by the
-rem goto process_cl_args stanza
-rem we are simply leveraging the "for" method's string
-rem tokenization
-for /f "tokens=1*" %%g in ("%cl_args%") do (
-    set t=%%~g
-    rem remainder of string is composed into %%h
-    rem these are the cl args yet to be processed
-    rem assign cl_args var to only the args to be processed
-    rem effectively discarding the current arg %%g
-    rem this will be nul when we have no further tokens to process
-    set cl_args=%%h
-    rem process the first space delineated cl arg
-    rem of this iteration
+for %%x in (%*) do (
+    set t="%%~x"
     if "!t:~0,1!" == "-" (
         if defined _sp_subcommand (
-            rem We already have a subcommand, processing args now
-            if not defined _sp_args (
-                set "_sp_args=!t!"
-            ) else (
-                set "_sp_args=!_sp_args! !t!"
-            )
+            :: We already have a subcommand, processing args now
+            set "_sp_args=!_sp_args! !t!"
         ) else (
-            if not defined _sp_flags (
-                set "_sp_flags=!t!"
-                shift
-            ) else (
-                set "_sp_flags=!_sp_flags! !t!"
-                shift
-            )
+            set "_sp_flags=!_sp_flags! !t!"
+            shift
         )
     ) else if not defined _sp_subcommand (
         set "_sp_subcommand=!t!"
         shift
     ) else (
-        if not defined _sp_args (
-            set "_sp_args=!t!"
-            shift
-        ) else (
-            set "_sp_args=!_sp_args! !t!"
-            shift
-        )
+        set "_sp_args=!_sp_args! !t!"
+        shift
     )
 )
-rem if this is not nil, we have more tokens to process
-rem start above process again with remaining unprocessed cl args
-if defined cl_args goto :process_cl_args


 :: --help, -h and -V flags don't require further output parsing.
 :: If we encounter, execute and exit
@@ -140,21 +95,31 @@ if not defined _sp_subcommand (

 :: pass parsed variables outside of local scope. Need to do
 :: this because delayedexpansion can only be set by setlocal
-endlocal & (
-    set "_sp_flags=%_sp_flags%"
-    set "_sp_args=%_sp_args%"
-    set "_sp_subcommand=%_sp_subcommand%"
-)
+echo %_sp_flags%>flags
+echo %_sp_args%>args
+echo %_sp_subcommand%>subcmd
+endlocal
+set /p _sp_subcommand=<subcmd
+set /p _sp_flags=<flags
+set /p _sp_args=<args
+if "%_sp_subcommand%"=="ECHO is off." (set "_sp_subcommand=")
+if "%_sp_subcommand%"=="ECHO is on." (set "_sp_subcommand=")
+if "%_sp_flags%"=="ECHO is off." (set "_sp_flags=")
+if "%_sp_flags%"=="ECHO is on." (set "_sp_flags=")
+if "%_sp_args%"=="ECHO is off." (set "_sp_args=")
+if "%_sp_args%"=="ECHO is on." (set "_sp_args=")
+del subcmd
+del flags
+del args

 :: Filter out some commands. For any others, just run the command.
-if "%_sp_subcommand%" == "cd" (
+if %_sp_subcommand% == "cd" (
     goto :case_cd
-) else if "%_sp_subcommand%" == "env" (
+) else if %_sp_subcommand% == "env" (
     goto :case_env
-) else if "%_sp_subcommand%" == "load" (
+) else if %_sp_subcommand% == "load" (
     goto :case_load
-) else if "%_sp_subcommand%" == "unload" (
+) else if %_sp_subcommand% == "unload" (
     goto :case_load
 ) else (
     goto :default_case
@@ -189,20 +154,20 @@ goto :end_switch
 if NOT defined _sp_args (
     goto :default_case
 )

-if NOT "%_sp_args%"=="%_sp_args:--help=%" (
+set args_no_quote=%_sp_args:"=%
+if NOT "%args_no_quote%"=="%args_no_quote:--help=%" (
     goto :default_case
-) else if NOT "%_sp_args%"=="%_sp_args: -h=%" (
+) else if NOT "%args_no_quote%"=="%args_no_quote: -h=%" (
     goto :default_case
-) else if NOT "%_sp_args%"=="%_sp_args:--bat=%" (
+) else if NOT "%args_no_quote%"=="%args_no_quote:--bat=%" (
     goto :default_case
-) else if NOT "%_sp_args%"=="%_sp_args:deactivate=%" (
+) else if NOT "%args_no_quote%"=="%args_no_quote:deactivate=%" (
     for /f "tokens=* USEBACKQ" %%I in (
-        `call python %spack% %_sp_flags% env deactivate --bat %_sp_args:deactivate=%`
+        `call python %spack% %_sp_flags% env deactivate --bat %args_no_quote:deactivate=%`
     ) do %%I
-) else if NOT "%_sp_args%"=="%_sp_args:activate=%" (
+) else if NOT "%args_no_quote%"=="%args_no_quote:activate=%" (
     for /f "tokens=* USEBACKQ" %%I in (
-        `python %spack% %_sp_flags% env activate --bat %_sp_args:activate=%`
+        `python %spack% %_sp_flags% env activate --bat %args_no_quote:activate=%`
     ) do %%I
 ) else (
     goto :default_case
@@ -223,7 +188,7 @@ if defined _sp_args (

 for /f "tokens=* USEBACKQ" %%I in (
     `python "%spack%" %_sp_flags% %_sp_subcommand% --bat %_sp_args%`) do %%I

 )
 goto :end_switch

 :case_unload
@@ -13,18 +13,16 @@ concretizer:
   # Whether to consider installed packages or packages from buildcaches when
   # concretizing specs. If `true`, we'll try to use as many installs/binaries
   # as possible, rather than building. If `false`, we'll always give you a fresh
-  # concretization. If `dependencies`, we'll only reuse dependencies but
-  # give you a fresh concretization for your root specs.
-  reuse: dependencies
+  # concretization.
+  reuse: true
   # Options that tune which targets are considered for concretization. The
   # concretization process is very sensitive to the number targets, and the time
   # needed to reach a solution increases noticeably with the number of targets
   # considered.
   targets:
-    # Determine whether we want to target specific or generic
-    # microarchitectures. Valid values are: "microarchitectures" or "generic".
-    # An example of "microarchitectures" would be "skylake" or "bulldozer",
-    # while an example of "generic" would be "aarch64" or "x86_64_v4".
+    # Determine whether we want to target specific or generic microarchitectures.
+    # An example of the first kind might be for instance "skylake" or "bulldozer",
+    # while generic microarchitectures are for instance "aarch64" or "x86_64_v4".
     granularity: microarchitectures
     # If "false" allow targets that are incompatible with the current host (for
     # instance concretize with target "icelake" while running on "haswell").
@@ -35,4 +33,4 @@ concretizer:
   # environments can always be activated. When "false" perform concretization separately
   # on each root spec, allowing different versions and variants of the same package in
   # an environment.
-  unify: true
+  unify: true
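Both settings of ``reuse`` swapped in the hunk above are valid; a minimal sketch of the file's schema as described by its own comments (the values shown are simply the two sides of this diff):

    concretizer:
      # one of: true (reuse installs/binaries wherever possible),
      # false (always concretize fresh),
      # dependencies (reuse dependencies, fresh root specs)
      reuse: dependencies
      targets:
        granularity: microarchitectures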
@@ -46,7 +46,7 @@ modules:

   tcl:
     all:
-      autoload: direct
+      autoload: none

   # Default configurations if lmod is enabled
   lmod:
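For context, the ``autoload`` key toggled above accepts ``none``, ``direct``, or ``all`` (per the module documentation changed later in this compare). A minimal sketch mirroring the nesting shown in this hunk:

    modules:
      tcl:
        all:
          autoload: direct   # one of: none, direct, all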
@@ -28,7 +28,7 @@ packages:
     gl: [glx, osmesa]
     glu: [mesa-glu, openglu]
     golang: [go, gcc]
-    go-or-gccgo-bootstrap: [go-bootstrap, gcc]
+    go-external-or-gccgo-bootstrap: [go-bootstrap, gcc]
     iconv: [libiconv]
     ipp: [intel-ipp]
     java: [openjdk, jdk, ibm-java]
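The provider lists above follow one pattern: the virtual package on the left maps to an ordered list of preferred providers. A minimal sketch, assuming these lists live under the usual ``packages:all:providers`` nesting:

    packages:
      all:
        providers:
          # the first entry wins when several packages provide the same virtual
          iconv: [libiconv]
          java: [openjdk, jdk, ibm-java]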
@@ -3,4 +3,3 @@ config:
   concretizer: clingo
   build_stage::
   - '$spack/.staging'
-  stage_name: '{name}-{version}-{hash:7}'
@@ -19,4 +19,3 @@ packages:
     - msvc
   providers:
     mpi: [msmpi]
-    gl: [wgl]
@@ -942,7 +942,7 @@ first ``libelf`` above, you would run:

    $ spack load /qmm4kso

-To see which packages that you have loaded to your environment you would
+To see which packages that you have loaded to your enviornment you would
 use ``spack find --loaded``.

 .. code-block:: console
@@ -18,7 +18,7 @@ your Spack mirror and then downloaded and installed by others.

 Whenever a mirror provides prebuilt packages, Spack will take these packages
 into account during concretization and installation, making ``spack install``
-significantly faster.
+signficantly faster.


 .. note::
@@ -28,14 +28,11 @@ This package provides the following variants:

 * **cuda_arch**

-  This variant supports the optional specification of one or multiple architectures.
+  This variant supports the optional specification of the architecture.
   Valid values are maintained in the ``cuda_arch_values`` property and
   are the numeric character equivalent of the compute capability version
   (e.g., '10' for version 1.0). Each provided value affects associated
   ``CUDA`` dependencies and compiler conflicts.

-  The variant builds both PTX code for the _virtual_ architecture
-  (e.g. ``compute_10``) and binary code for the _real_ architecture (e.g. ``sm_10``).
-
   GPUs and their compute capability versions are listed at
   https://developer.nvidia.com/cuda-gpus .
@@ -124,7 +124,7 @@ Using oneAPI Tools Installed by Spack
 =====================================

 Spack can be a convenient way to install and configure compilers and
-libraries, even if you do not intend to build a Spack package. If you
+libaries, even if you do not intend to build a Spack package. If you
 want to build a Makefile project using Spack-installed oneAPI compilers,
 then use spack to configure your environment::
@@ -397,7 +397,7 @@ for specifics and examples for ``packages.yaml`` files.

 .. If your system administrator did not provide modules for pre-installed Intel
    tools, you could do well to ask for them, because installing multiple copies
-   of the Intel tools, as is won't to happen once Spack is in the picture, is
+   of the Intel tools, as is wont to happen once Spack is in the picture, is
    bound to stretch disk space and patience thin. If you *are* the system
    administrator and are still new to modules, then perhaps it's best to follow
    the `next section <Installing Intel tools within Spack_>`_ and install the tools
@@ -653,7 +653,7 @@ follow `the next section <intel-install-libs_>`_ instead.
 * If you specified a custom variant (for example ``+vtune``) you may want to add this as your
   preferred variant in the packages configuration for the ``intel-parallel-studio`` package
   as described in :ref:`package-preferences`. Otherwise you will have to specify
-  the variant every time ``intel-parallel-studio`` is being used as ``mkl``, ``fftw`` or ``mpi``
+  the variant everytime ``intel-parallel-studio`` is being used as ``mkl``, ``fftw`` or ``mpi``
   implementation to avoid pulling in a different variant.

 * To set the Intel compilers for default use in Spack, instead of the usual ``%gcc``,
@@ -582,7 +582,7 @@ libraries. Make sure not to add modules/packages containing the word
 "test", as these likely won't end up in the installation directory,
 or may require test dependencies like pytest to be installed.

-Instead of defining the ``import_modules`` explicitly, only the subset
+Instead of defining the ``import_modules`` explicity, only the subset
 of module names to be skipped can be defined by using ``skip_modules``.
 If a defined module has submodules, they are skipped as well, e.g.,
 in case the ``plotting`` modules should be excluded from the
@@ -227,9 +227,6 @@ You can get the name to use for ``<platform>`` by running ``spack arch
 --platform``. The system config scope has a ``<platform>`` section for
 sites at which ``/etc`` is mounted on multiple heterogeneous machines.

-
-.. _config-scope-precedence:
-
 ----------------
 Scope Precedence
 ----------------
@@ -242,11 +239,6 @@ lower-precedence settings. Completely ignoring higher-level configuration
 options is supported with the ``::`` notation for keys (see
 :ref:`config-overrides` below).

-There are also special notations for string concatenation and precendense override.
-Using the ``+:`` notation can be used to force *prepending* strings or lists. For lists, this is identical
-to the default behavior. Using the ``-:`` works similarly, but for *appending* values.
-:ref:`config-prepend-append`
-
 ^^^^^^^^^^^
 Simple keys
 ^^^^^^^^^^^
@@ -287,47 +279,6 @@ command:
     - ~/.spack/stage


-.. _config-prepend-append:
-
-^^^^^^^^^^^^^^^^^^^^
-String Concatenation
-^^^^^^^^^^^^^^^^^^^^
-
-Above, the user ``config.yaml`` *completely* overrides specific settings in the
-default ``config.yaml``. Sometimes, it is useful to add a suffix/prefix
-to a path or name. To do this, you can use the ``-:`` notation for *append*
-string concatenation at the end of a key in a configuration file. For example:
-
-.. code-block:: yaml
-   :emphasize-lines: 1
-   :caption: ~/.spack/config.yaml
-
-   config:
-     install_tree-: /my/custom/suffix/
-
-Spack will then append to the lower-precedence configuration under the
-``install_tree-:`` section:
-
-.. code-block:: console
-
-   $ spack config get config
-   config:
-     install_tree: /some/other/directory/my/custom/suffix
-     build_stage:
-     - $tempdir/$user/spack-stage
-     - ~/.spack/stage
-
-
-Similarly, ``+:`` can be used to *prepend* to a path or name:
-
-.. code-block:: yaml
-   :emphasize-lines: 1
-   :caption: ~/.spack/config.yaml
-
-   config:
-     install_tree+: /my/custom/suffix/
-
 .. _config-overrides:

 ^^^^^^^^^^^^^^^^^^^^^^^^^^
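The ``::`` override notation referenced above (and exercised by the ``build_stage::`` key earlier in this compare) can be sketched as follows; the path is a placeholder:

    config:
      # A double colon *replaces* any lower-precedence value for this key
      # instead of merging with it:
      build_stage::
      - /custom/stage/path                # placeholder path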
@@ -444,120 +444,6 @@ attribute:
 The minimum version of Singularity required to build a SIF (Singularity Image Format)
 image from the recipes generated by Spack is ``3.5.3``.

-------------------------------
-Extending the Jinja2 Templates
-------------------------------
-
-The Dockerfile and the Singularity definition file that Spack can generate are based on
-a few Jinja2 templates that are rendered according to the environment being containerized.
-Even though Spack allows a great deal of customization by just setting appropriate values for
-the configuration options, sometimes that is not enough.
-
-In those cases, a user can directly extend the template that Spack uses to render the image
-to e.g. set additional environment variables or perform specific operations either before or
-after a given stage of the build. Let's consider as an example the following structure:
-
-.. code-block:: console
-
-   $ tree /opt/environment
-   /opt/environment
-   ├── data
-   │   └── data.csv
-   ├── spack.yaml
-   └── templates
-       └── container
-           └── CustomDockerfile
-
-containing both the custom template extension and the environment manifest file. To use a custom
-template, the environment must register the directory containing it, and declare its use under the
-``container`` configuration:
-
-.. code-block:: yaml
-   :emphasize-lines: 7-8,12
-
-   spack:
-     specs:
-     - hdf5~mpi
-     concretizer:
-       unify: true
-     config:
-       template_dirs:
-       - /opt/environment/templates
-     container:
-       format: docker
-       depfile: true
-       template: container/CustomDockerfile
-
-The template extension can override two blocks, named ``build_stage`` and ``final_stage``, similarly to
-the example below:
-
-.. code-block::
-   :emphasize-lines: 3,8
-
-   {% extends "container/Dockerfile" %}
-   {% block build_stage %}
-   RUN echo "Start building"
-   {{ super() }}
-   {% endblock %}
-   {% block final_stage %}
-   {{ super() }}
-   COPY data /share/myapp/data
-   {% endblock %}
-
-The recipe that gets generated contains the two extra instruction that we added in our template extension:
-
-.. code-block:: Dockerfile
-   :emphasize-lines: 4,43
-
-   # Build stage with Spack pre-installed and ready to be used
-   FROM spack/ubuntu-jammy:latest as builder
-
-   RUN echo "Start building"
-
-   # What we want to install and how we want to install it
-   # is specified in a manifest file (spack.yaml)
-   RUN mkdir /opt/spack-environment \
-   &&  (echo "spack:" \
-   &&   echo "  specs:" \
-   &&   echo "  - hdf5~mpi" \
-   &&   echo "  concretizer:" \
-   &&   echo "    unify: true" \
-   &&   echo "  config:" \
-   &&   echo "    template_dirs:" \
-   &&   echo "    - /tmp/environment/templates" \
-   &&   echo "    install_tree: /opt/software" \
-   &&   echo "  view: /opt/view") > /opt/spack-environment/spack.yaml
-
-   # Install the software, remove unnecessary deps
-   RUN cd /opt/spack-environment && spack env activate . && spack concretize && spack env depfile -o Makefile && make -j $(nproc) && spack gc -y
-
-   # Strip all the binaries
-   RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
-       xargs file -i | \
-       grep 'charset=binary' | \
-       grep 'x-executable\|x-archive\|x-sharedlib' | \
-       awk -F: '{print $1}' | xargs strip -s
-
-   # Modifications to the environment that are necessary to run
-   RUN cd /opt/spack-environment && \
-       spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
-
-   # Bare OS image to run the installed executables
-   FROM ubuntu:22.04
-
-   COPY --from=builder /opt/spack-environment /opt/spack-environment
-   COPY --from=builder /opt/software /opt/software
-   COPY --from=builder /opt/._view /opt/._view
-   COPY --from=builder /opt/view /opt/view
-   COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
-
-   COPY data /share/myapp/data
-
-   ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l", "-c", "$*", "--" ]
-   CMD [ "/bin/bash" ]
-
 .. _container_config_options:

 -----------------------
@@ -578,10 +464,6 @@ to customize the generation of container recipes:
     - The format of the recipe
     - ``docker`` or ``singularity``
     - Yes
-  * - ``depfile``
-    - Whether to use a depfile for installation, or not
-    - True or False (default)
-    - No
   * - ``images:os``
     - Operating system used as a base for the image
     - See :ref:`containers-supported-os`
@@ -630,6 +512,14 @@ to customize the generation of container recipes:
     - System packages needed at run-time
     - Valid packages for the current OS
     - No
+  * - ``extra_instructions:build``
+    - Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``build`` stage
+    - Anything understood by the current ``format``
+    - No
+  * - ``extra_instructions:final``
+    - Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``final`` stage
+    - Anything understood by the current ``format``
+    - No
   * - ``labels``
     - Labels to tag the image
     - Pairs of key-value strings
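The two table rows added above describe ``extra_instructions``. A minimal sketch of how such a key might look in an environment's ``container`` section — the nesting under ``container`` and the instruction bodies are assumptions inferred from the option names ``extra_instructions:build`` and ``extra_instructions:final``:

    container:
      format: docker
      extra_instructions:
        build: |
          RUN echo "extra build-stage step"     # placeholder instruction
        final: |
          RUN echo "extra final-stage step"     # placeholder instruction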
@@ -472,7 +472,7 @@ use my new hook as follows:

 .. code-block:: python

    def post_log_write(message, level):
-       """Do something custom with the message and level every time we write
+       """Do something custom with the messsage and level every time we write
        to the log
        """
        print('running post_log_write!')
@@ -1597,8 +1597,8 @@ in a Windows CMD prompt.

 .. note::
    If you chose to install Spack into a directory on Windows that is set up to require Administrative
-   Privileges, Spack will require elevated privileges to run.
-   Administrative Privileges can be denoted either by default such as
+   Privleges, Spack will require elevated privleges to run.
+   Administrative Privleges can be denoted either by default such as
    ``C:\Program Files``, or aministrator applied administrative restrictions
    on a directory that spack installs files to such as ``C:\Users``
@@ -1694,7 +1694,7 @@ Spack console via:

    spack install cpuinfo

-If in the previous step, you did not have CMake or Ninja installed, running the command above should bootstrap both packages
+If in the previous step, you did not have CMake or Ninja installed, running the command above should boostrap both packages

 """""""""""""""""""""""""""
 Windows Compatible Packages
@@ -13,7 +13,7 @@ The use of module systems to manage user environment in a controlled way
 is a common practice at HPC centers that is often embraced also by
 individual programmers on their development machines. To support this
 common practice Spack integrates with `Environment Modules
-<http://modules.sourceforge.net/>`_ and `Lmod
+<http://modules.sourceforge.net/>`_ and `LMod
 <http://lmod.readthedocs.io/en/latest/>`_ by providing post-install hooks
 that generate module files and commands to manipulate them.
@@ -26,8 +26,8 @@ Using module files via Spack
 ----------------------------

 If you have installed a supported module system you should be able to
-run ``module avail`` to see what module
-files have been installed. Here is sample output of those programs,
+run either ``module avail`` or ``use -l spack`` to see what module
+files have been installed. Here is sample output of those programs,
 showing lots of installed packages:

 .. code-block:: console
@@ -51,7 +51,12 @@ showing lots of installed packages:
    help2man-1.47.4-gcc-4.8-kcnqmau lua-luaposix-33.4.0-gcc-4.8-mdod2ry netlib-scalapack-2.0.2-gcc-6.3.0-rgqfr6d py-scipy-0.19.0-gcc-6.3.0-kr7nat4 zlib-1.2.11-gcc-6.3.0-7cqp6cj

 The names should look familiar, as they resemble the output from ``spack find``.
-For example, you could type the following command to load the ``cmake`` module:
+You *can* use the modules here directly. For example, you could type either of these commands
+to load the ``cmake`` module:

 .. code-block:: console

+   $ use cmake-3.7.2-gcc-6.3.0-fowuuby
+
+.. code-block:: console
+
@@ -88,9 +93,9 @@ the different file formats that can be generated by Spack:
 +-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
 |                             | **Hook name**      | **Default root directory**    | **Default template file**                    | **Compatible tools** |
 +=============================+====================+===============================+==============================================+======================+
-| **Tcl - Non-Hierarchical**  | ``tcl``            | share/spack/modules           | share/spack/templates/modules/modulefile.tcl | Env. Modules/Lmod    |
+| **TCL - Non-Hierarchical**  | ``tcl``            | share/spack/modules           | share/spack/templates/modules/modulefile.tcl | Env. Modules/LMod    |
 +-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
-| **Lua - Hierarchical**      | ``lmod``           | share/spack/lmod              | share/spack/templates/modules/modulefile.lua | Lmod                 |
+| **Lua - Hierarchical**      | ``lmod``           | share/spack/lmod              | share/spack/templates/modules/modulefile.lua | LMod                 |
 +-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+

@@ -391,13 +396,13 @@ name and version for all packages that depend on mpi.

 When specifying module names by projection for Lmod modules, we
 recommend NOT including names of dependencies (e.g., MPI, compilers)
-that are already in the Lmod hierarchy.
+that are already in the LMod hierarchy.


 .. note::
-   Tcl modules
-   Tcl modules also allow for explicit conflicts between modulefiles.
+   TCL modules
+   TCL modules also allow for explicit conflicts between modulefiles.

    .. code-block:: yaml
@@ -421,9 +426,9 @@ that are already in the Lmod hierarchy.


 .. note::
-   Lmod hierarchical module files
+   LMod hierarchical module files
    When ``lmod`` is activated Spack will generate a set of hierarchical lua module
-   files that are understood by Lmod. The hierarchy will always contain the
+   files that are understood by LMod. The hierarchy will always contain the
    two layers ``Core`` / ``Compiler`` but can be further extended to
    any of the virtual dependencies present in Spack. A case that could be useful in
    practice is for instance:
@@ -445,7 +450,7 @@ that are already in the Lmod hierarchy.

    that will generate a hierarchy in which the ``lapack`` and ``mpi`` layer can be switched
    independently. This allows a site to build the same libraries or applications against different
-   implementations of ``mpi`` and ``lapack``, and let Lmod switch safely from one to the
+   implementations of ``mpi`` and ``lapack``, and let LMod switch safely from one to the
    other.

    All packages built with a compiler in ``core_compilers`` and all
@@ -455,12 +460,12 @@ that are already in the Lmod hierarchy.
 .. warning::
    Consistency of Core packages
    The user is responsible for maintining consistency among core packages, as ``core_specs``
-   bypasses the hierarchy that allows Lmod to safely switch between coherent software stacks.
+   bypasses the hierarchy that allows LMod to safely switch between coherent software stacks.

 .. warning::
    Deep hierarchies and ``lmod spider``
    For hierarchies that are deeper than three layers ``lmod spider`` may have some issues.
-   See `this discussion on the Lmod project <https://github.com/TACC/Lmod/issues/114>`_.
+   See `this discussion on the LMod project <https://github.com/TACC/Lmod/issues/114>`_.

 """"""""""""""""""""""
 Select default modules
@@ -529,7 +534,7 @@ installed to ``/spack/prefix/foo``, if ``foo`` installs executables to
 update ``MANPATH``.

 The default list of environment variables in this config section
-includes ``PATH``, ``MANPATH``, ``ACLOCAL_PATH``, ``PKG_CONFIG_PATH``
+inludes ``PATH``, ``MANPATH``, ``ACLOCAL_PATH``, ``PKG_CONFIG_PATH``
 and ``CMAKE_PREFIX_PATH``, as well as ``DYLD_FALLBACK_LIBRARY_PATH``
 on macOS. On Linux however, the corresponding ``LD_LIBRARY_PATH``
 variable is *not* set, because it affects the behavior of
@@ -629,9 +634,8 @@ by its dependency; when the dependency is autoloaded, the executable will be in
 PATH. Similarly for scripting languages such as Python, packages and their dependencies
 have to be loaded together.

-Autoloading is enabled by default for Lmod and Environment Modules. The former
-has builtin support for through the ``depends_on`` function. The latter uses
-``module load`` statement to load and track dependencies.
+Autoloading is enabled by default for LMod, as it has great builtin support for through
+the ``depends_on`` function. For Environment Modules it is disabled by default.

 Autoloading can also be enabled conditionally:
@@ -651,14 +655,12 @@ The allowed values for the ``autoload`` statement are either ``none``,
 ``direct`` or ``all``.

 .. note::
-   Tcl prerequisites
+   TCL prerequisites
    In the ``tcl`` section of the configuration file it is possible to use
    the ``prerequisites`` directive that accepts the same values as
    ``autoload``. It will produce module files that have a ``prereq``
-   statement, which autoloads dependencies on Environment Modules when its
-   ``auto_handling`` configuration option is enabled. If Environment Modules
-   is installed with Spack, ``auto_handling`` is enabled by default starting
-   version 4.2. Otherwise it is enabled by default since version 5.0.
+   statement, which can be used to autoload dependencies in some versions
+   of Environment Modules.

 ------------------------
 Maintaining Module Files
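The note above says ``prerequisites`` accepts the same values as ``autoload``. A minimal sketch of a ``tcl`` section using both keys together (the nesting under ``modules:default`` is an assumption about this era's config layout):

    modules:
      default:
        tcl:
          all:
            autoload: direct
            prerequisites: direct   # same allowed values: none, direct, all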
(One file's diff is suppressed because it is too large.)
@@ -9,32 +9,27 @@
|
||||
CI Pipelines
|
||||
============
|
||||
|
||||
Spack provides commands that support generating and running automated build pipelines in CI instances. At the highest
|
||||
level it works like this: provide a spack environment describing the set of packages you care about, and include a
|
||||
description of how those packages should be mapped to Gitlab runners. Spack can then generate a ``.gitlab-ci.yml``
|
||||
file containing job descriptions for all your packages that can be run by a properly configured CI instance. When
|
||||
run, the generated pipeline will build and deploy binaries, and it can optionally report to a CDash instance
|
||||
Spack provides commands that support generating and running automated build
|
||||
pipelines designed for Gitlab CI. At the highest level it works like this:
|
||||
provide a spack environment describing the set of packages you care about,
|
||||
and include within that environment file a description of how those packages
|
||||
should be mapped to Gitlab runners. Spack can then generate a ``.gitlab-ci.yml``
|
||||
file containing job descriptions for all your packages that can be run by a
|
||||
properly configured Gitlab CI instance. When run, the generated pipeline will
|
||||
build and deploy binaries, and it can optionally report to a CDash instance
|
||||
regarding the health of the builds as they evolve over time.
|
||||
|
||||
------------------------------
|
||||
Getting started with pipelines
|
||||
------------------------------
|
||||
|
||||
To get started with automated build pipelines a Gitlab instance with version ``>= 12.9``
|
||||
(more about Gitlab CI `here <https://about.gitlab.com/product/continuous-integration/>`_)
|
||||
with at least one `runner <https://docs.gitlab.com/runner/>`_ configured is required. This
|
||||
can be done quickly by setting up a local Gitlab instance.
|
||||
It is fairly straightforward to get started with automated build pipelines. At
|
||||
a minimum, you'll need to set up a Gitlab instance (more about Gitlab CI
|
||||
`here <https://about.gitlab.com/product/continuous-integration/>`_) and configure
|
||||
at least one `runner <https://docs.gitlab.com/runner/>`_. Then the basic steps
|
||||
for setting up a build pipeline are as follows:
|
||||
|
||||
It is possible to set up pipelines on gitlab.com, but the builds there are limited to
|
||||
60 minutes and generic hardware. It is possible to
|
||||
`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
|
||||
Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
|
||||
or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
|
||||
topics are outside the scope of this document.
|
||||
|
||||
After setting up a Gitlab instance for running CI, the basic steps for setting up a build pipeline are as follows:
|
||||
|
||||
#. Create a repository in the Gitlab instance with CI and a runner enabled.
|
||||
#. Create a repository on your gitlab instance
|
||||
#. Add a ``spack.yaml`` at the root containing your pipeline environment
|
||||
#. Add a ``.gitlab-ci.yml`` at the root containing two jobs (one to generate
|
||||
the pipeline dynamically, and one to run the generated jobs).
|
||||
@@ -45,6 +40,13 @@ See the :ref:`functional_example` section for a minimal working example. See al
|
||||
the :ref:`custom_Workflow` section for a link to an example of a custom workflow
|
||||
based on spack pipelines.
|
||||
|
||||
While it is possible to set up pipelines on gitlab.com, as illustrated above, the
|
||||
builds there are limited to 60 minutes and generic hardware. It is also possible to
|
||||
`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
|
||||
Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
|
||||
or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
|
||||
topics are outside the scope of this document.
|
||||
|
||||
Spack's pipelines are now making use of the
|
||||
`trigger <https://docs.gitlab.com/ee/ci/yaml/#trigger>`_ syntax to run
|
||||
dynamically generated
|
||||
@@ -130,35 +132,29 @@ And here's the spack environment built by the pipeline represented as a
|
||||
|
||||
mirrors: { "mirror": "s3://spack-public/mirror" }
|
||||
|
||||
ci:
|
||||
gitlab-ci:
|
||||
before_script:
|
||||
- git clone ${SPACK_REPO}
|
||||
- pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
|
||||
- . "./spack/share/spack/setup-env.sh"
|
||||
script:
|
||||
- pushd ${SPACK_CONCRETE_ENV_DIR} && spack env activate --without-view . && popd
|
||||
- spack -d ci rebuild
|
||||
mappings:
|
||||
- match: ["os=ubuntu18.04"]
|
||||
runner-attributes:
|
||||
image:
|
||||
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
|
||||
entrypoint: [""]
|
||||
tags:
|
||||
- docker
|
||||
enable-artifacts-buildcache: True
|
||||
rebuild-index: False
|
||||
pipeline-gen:
|
||||
- any-job:
|
||||
before_script:
|
||||
- git clone ${SPACK_REPO}
|
||||
- pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
|
||||
- . "./spack/share/spack/setup-env.sh"
|
||||
- build-job:
|
||||
tags: [docker]
|
||||
image:
|
||||
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
|
||||
entrypoint: [""]
|
||||
|
||||
|
||||
The elements of this file important to spack ci pipelines are described in more
|
||||
detail below, but there are a couple of things to note about the above working
|
||||
example:
|
||||
|
||||
.. note::
|
||||
There is no ``script`` attribute specified for here. The reason for this is
|
||||
Spack CI will automatically generate reasonable default scripts. More
|
||||
detail on what is in these scripts can be found below.
|
||||
|
||||
Also notice the ``before_script`` section. It is required when using any of the
|
||||
default scripts to source the ``setup-env.sh`` script in order to inform
|
||||
the default scripts where to find the ``spack`` executable.
|
||||
|
||||
Normally ``enable-artifacts-buildcache`` is not recommended in production as it
|
||||
results in large binary artifacts getting transferred back and forth between
|
||||
gitlab and the runners. But in this example on gitlab.com where there is no
|
||||
@@ -178,7 +174,7 @@ during subsequent pipeline runs.
|
||||
With the addition of reproducible builds (#22887) a previously working
|
||||
pipeline will require some changes:
|
||||
|
||||
* In the build-jobs, the environment location changed.
|
||||
* In the build jobs (``runner-attributes``), the environment location changed.
|
||||
This will typically show as a ``KeyError`` in the failing job. Be sure to
|
||||
point to ``${SPACK_CONCRETE_ENV_DIR}``.
|
||||
|
||||
@@ -200,9 +196,9 @@ ci pipelines. These commands are covered in more detail in this section.
|
||||
|
||||
.. _cmd-spack-ci:
|
||||
|
||||
^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
``spack ci``
|
||||
^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Super-command for functionality related to generating pipelines and executing
|
||||
pipeline jobs.
|
||||
@@ -231,7 +227,7 @@ Using ``--prune-dag`` or ``--no-prune-dag`` configures whether or not jobs are
|
||||
generated for specs that are already up to date on the mirror. If enabling
|
||||
DAG pruning using ``--prune-dag``, more information may be required in your
|
||||
``spack.yaml`` file, see the :ref:`noop_jobs` section below regarding
|
||||
``noop-job``.
|
||||
``service-job-attributes``.
|
||||
|
||||
The optional ``--check-index-only`` argument can be used to speed up pipeline
|
||||
generation by telling spack to consider only remote buildcache indices when
|
||||
@@ -267,11 +263,11 @@ generated by jobs in the pipeline.
|
||||
|
||||
.. _cmd-spack-ci-rebuild:
|
||||
|
||||
^^^^^^^^^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^^^^
|
||||
``spack ci rebuild``
|
||||
^^^^^^^^^^^^^^^^^^^^
|
||||
^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
The purpose of ``spack ci rebuild`` is to take an assigned
|
||||
The purpose of ``spack ci rebuild`` is straightforward: take its assigned
|
||||
spec and ensure a binary of a successful build exists on the target mirror.
|
||||
If the binary does not already exist, it is built from source and pushed
|
||||
to the mirror. The associated stand-alone tests are optionally run against
|
||||
@@ -284,7 +280,7 @@ directory. The script is run in a job to install the spec from source. The
|
||||
resulting binary package is pushed to the mirror. If ``cdash`` is configured
|
||||
for the environment, then the build results will be uploaded to the site.
|
||||
|
||||
Environment variables and values in the ``ci::pipeline-gen`` section of the
|
||||
Environment variables and values in the ``gitlab-ci`` section of the
|
||||
``spack.yaml`` environment file provide inputs to this process. The
|
||||
two main sources of environment variables are variables written into
|
||||
``.gitlab-ci.yml`` by ``spack ci generate`` and the GitLab CI runtime.
|
||||
@@ -302,23 +298,21 @@ A snippet from an example ``spack.yaml`` file illustrating use of this
|
||||
option *and* specification of a package with broken tests is given below.
|
||||
The inclusion of a spec for building ``gptune`` is not shown here. Note
|
||||
that ``--tests`` is passed to ``spack ci rebuild`` as part of the
|
||||
``build-job`` script.
|
||||
``gitlab-ci`` script.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
ci:
|
||||
pipeline-gen:
|
||||
- build-job
|
||||
script:
|
||||
- . "./share/spack/setup-env.sh"
|
||||
- spack --version
|
||||
- cd ${SPACK_CONCRETE_ENV_DIR}
|
||||
- spack env activate --without-view .
|
||||
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
|
||||
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
|
||||
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
|
||||
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
|
||||
- spack -d ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
|
||||
gitlab-ci:
|
||||
script:
|
||||
- . "./share/spack/setup-env.sh"
|
||||
- spack --version
|
||||
- cd ${SPACK_CONCRETE_ENV_DIR}
|
||||
- spack env activate --without-view .
|
||||
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
|
||||
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
|
||||
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
|
||||
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
|
||||
- spack -d ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
|
||||
|
||||
broken-tests-packages:
|
||||
- gptune
|
||||
@@ -360,31 +354,113 @@ arguments you can pass to ``spack ci reproduce-build`` in order to reproduce
|
||||
a particular build locally.
|
||||
|
||||
------------------------------------
|
||||
Job Types
|
||||
A pipeline-enabled spack environment
|
||||
------------------------------------
|
||||
|
||||
^^^^^^^^^^^^^^^
|
||||
Rebuild (build)
|
||||
^^^^^^^^^^^^^^^
|
||||
Here's an example of a spack environment file that has been enhanced with
|
||||
sections describing a build pipeline:
|
||||
|
||||
Rebuild jobs, denoted as ``build-job``'s in the ``pipeline-gen`` list, are jobs
|
||||
associated with concrete specs that have been marked for rebuild. By default a simple
|
||||
script for doing rebuild is generated, but may be modified as needed.
|
||||
.. code-block:: yaml
|
||||
|
||||
The default script does three main steps, change directories to the pipelines concrete
|
||||
environment, activate the concrete environment, and run the ``spack ci rebuild`` command:
|
||||
spack:
|
||||
definitions:
|
||||
- pkgs:
|
||||
- readline@7.0
|
||||
- compilers:
|
||||
- '%gcc@5.5.0'
|
||||
- oses:
|
||||
- os=ubuntu18.04
|
||||
- os=centos7
|
||||
specs:
|
||||
- matrix:
|
||||
- [$pkgs]
|
||||
- [$compilers]
|
||||
- [$oses]
|
||||
mirrors:
|
||||
cloud_gitlab: https://mirror.spack.io
|
||||
gitlab-ci:
|
||||
mappings:
|
||||
- match:
|
||||
- os=ubuntu18.04
|
||||
runner-attributes:
|
||||
tags:
|
||||
- spack-kube
|
||||
image: spack/ubuntu-bionic
|
||||
- match:
|
||||
- os=centos7
|
||||
runner-attributes:
|
||||
tags:
|
||||
- spack-kube
|
||||
image: spack/centos7
|
||||
cdash:
|
||||
build-group: Release Testing
|
||||
url: https://cdash.spack.io
|
||||
project: Spack
|
||||
site: Spack AWS Gitlab Instance
|
||||
|
||||
.. code-block:: bash
|
||||
Hopefully, the ``definitions``, ``specs``, ``mirrors``, etc. sections are already
|
||||
familiar, as they are part of spack :ref:`environments`. So let's take a more
|
||||
in-depth look some of the pipeline-related sections in that environment file
|
||||
that might not be as familiar.
|
||||
|
||||
cd ${concrete_environment_dir}
|
||||
spack env activate --without-view .
|
||||
spack ci rebuild
|
||||
The ``gitlab-ci`` section is used to configure how the pipeline workload should be
generated, mainly how the jobs for building specs should be assigned to the
configured runners on your instance. Each entry within the list of ``mappings``
corresponds to a known gitlab runner, where the ``match`` section is used
in assigning a release spec to one of the runners, and the ``runner-attributes``
section is used to configure the spec/job for that particular runner.

Both the top-level ``gitlab-ci`` section as well as each ``runner-attributes``
section can also contain the following keys: ``image``, ``tags``, ``variables``,
``before_script``, ``script``, and ``after_script``. If any of these keys are
provided at the ``gitlab-ci`` level, they will be used as the defaults for any
``runner-attributes``, unless they are overridden in those sections. Specifying
any of these keys at the ``runner-attributes`` level generally overrides the
keys specified at the higher level, with a couple of exceptions. Any ``variables``
specified at both levels result in those dictionaries getting merged in the
resulting generated job, and any duplicate variable names get assigned the value
provided in the specific ``runner-attributes``. If ``tags`` are specified both
at the ``gitlab-ci`` level as well as the ``runner-attributes`` level, then the
lists of tags are combined, and any duplicates are removed.
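
As a minimal sketch of this merging behavior (the variable names here are hypothetical),
defaults given at the ``gitlab-ci`` level combine with per-runner settings like so:

.. code-block:: yaml

   gitlab-ci:
     tags: [spack-kube]
     variables:
       COMMON_VAR: "shared"          # hypothetical variable
     mappings:
     - match:
       - os=centos7
       runner-attributes:
         tags: [x86_64]              # combined with spack-kube, duplicates removed
         variables:
           COMMON_VAR: "override"    # the runner-attributes value wins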

See the section below on using a custom spack for an example of how these keys
could be used.

There are other pipeline options you can configure within the ``gitlab-ci`` section
as well.

The ``bootstrap`` section allows you to specify lists of specs from
your ``definitions`` that should be staged ahead of the environment's ``specs`` (this
section is described in more detail below). The ``enable-artifacts-buildcache`` key
takes a boolean and determines whether the pipeline uses artifacts to store and
pass along the buildcaches from one stage to the next (the default if you don't
provide this option is ``False``).

The optional ``broken-specs-url`` key tells Spack to check against a list of
specs that are known to be currently broken in ``develop``. If any such specs
are found, the ``spack ci generate`` command will fail with an error message
informing the user what broken specs were encountered. This allows the pipeline
to fail early and avoid wasting compute resources attempting to build packages
that will not succeed.
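
A minimal sketch of these two options inside the ``gitlab-ci`` section (the URL is a
placeholder):

.. code-block:: yaml

   gitlab-ci:
     enable-artifacts-buildcache: true
     broken-specs-url: "https://example.com/broken-specs-list"  # placeholder URL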

The optional ``cdash`` section provides information that will be used by the
``spack ci generate`` command (invoked by ``spack ci start``) for reporting
to CDash. All the jobs generated from this environment will belong to a
"build group" within CDash that can be tracked over time. As the release
progresses, this build group may have jobs added or removed. The url, project,
and site are used to specify the CDash instance to which build results should
be reported.

Take a look at the
`schema <https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/gitlab_ci.py>`_
for the gitlab-ci section of the spack environment file, to see precisely what
syntax is allowed there.

.. _rebuild_index:

^^^^^^^^^^^^^^^^^^^^^^
Update Index (reindex)
^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Note about rebuilding buildcache index
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, while a pipeline job may rebuild a package, create a buildcache
entry, and push it to the mirror, it does not automatically re-generate the
@@ -399,44 +475,21 @@ not correctly reflect the mirror's contents at the end of a pipeline.
To make sure the buildcache index is up to date at the end of your pipeline,
spack generates a job to update the buildcache index of the target mirror
at the end of each pipeline by default. You can disable this behavior by
adding ``rebuild-index: False`` inside the ``ci`` section of your
spack environment.

Reindex jobs do not allow modifying the ``script`` attribute since it is automatically
generated using the target mirror listed in the ``mirrors::mirror`` configuration.
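
For example, this minimal sketch disables the index-update job entirely:

.. code-block:: yaml

   spack:
     ci:
       rebuild-index: false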

^^^^^^^^^^^^^^^^^
Signing (signing)
^^^^^^^^^^^^^^^^^

This job is run after all of the rebuild jobs are completed and is intended to be used
to sign the package binaries built by a protected CI run. Signing jobs are generated
only if a signing job ``script`` is specified and the spack CI job type is protected.
Note that if an ``any-job`` section contains a script, this will not implicitly create a
``signing`` job; a signing job may only exist if it is explicitly specified in the
configuration with a ``script`` attribute. Specifying a signing job without a script
does not create a signing job, and the job configuration attributes will be ignored.
Signing jobs are always assigned the runner tags ``aws``, ``protected``, and ``notary``.
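
A minimal sketch of enabling a signing job (the image name and signing script are
hypothetical placeholders):

.. code-block:: yaml

   spack:
     ci:
       pipeline-gen:
       - signing-job:
           image:
             name: some.image.registry/notary-image  # hypothetical image
             entrypoint: ['/bin/bash']
           script:
           - ./scripts/sign-binaries.sh              # hypothetical signing script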

^^^^^^^^^^^^^^^^^
Cleanup (cleanup)
^^^^^^^^^^^^^^^^^

When using ``temporary-storage-url-prefix``, the cleanup job will destroy the mirror
created for the associated Gitlab pipeline. Cleanup jobs do not allow modifying the
script, but they do expect that the spack command is in the path and require a
``before_script`` to be specified that sources the ``setup-env.sh`` script.
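
A minimal sketch of the required ``before_script`` (assuming a Spack checkout is
available in the job's working directory):

.. code-block:: yaml

   spack:
     ci:
       pipeline-gen:
       - cleanup-job:
           before_script:
           - . ./share/spack/setup-env.sh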

adding ``rebuild-index: False`` inside the ``gitlab-ci`` section of your
spack environment. Spack will assign the job any runner attributes found
on the ``service-job-attributes``, if you have provided that in your
``spack.yaml``.

.. _noop_jobs:

^^^^^^^^^^^^
No Op (noop)
^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^
Note about "no-op" jobs
^^^^^^^^^^^^^^^^^^^^^^^

If no specs in an environment need to be rebuilt during a given pipeline run
(meaning all are already up to date on the mirror), a single successful job
(a NO-OP) is still generated to avoid an empty pipeline (which GitLab
considers to be an error). The ``noop-job*`` sections
considers to be an error). An optional ``service-job-attributes`` section
can be added to your ``spack.yaml`` where you can provide ``tags`` and
``image`` or ``variables`` for the generated NO-OP job. This section also
supports providing ``before_script``, ``script``, and ``after_script``, in
@@ -446,100 +499,51 @@ Following is an example of this section added to a ``spack.yaml``:

.. code-block:: yaml

   spack:
     ci:
       pipeline-gen:
       - noop-job:
           tags: ['custom', 'tag']
           image:
             name: 'some.image.registry/custom-image:latest'
             entrypoint: ['/bin/bash']
           script::
           - echo "Custom message in a custom script"
   spack:
     specs:
     - openmpi
     mirrors:
       cloud_gitlab: https://mirror.spack.io
     gitlab-ci:
       mappings:
       - match:
         - os=centos8
         runner-attributes:
           tags:
           - custom
           - tag
           image: spack/centos7
       service-job-attributes:
         tags: ['custom', 'tag']
         image:
           name: 'some.image.registry/custom-image:latest'
           entrypoint: ['/bin/bash']
         script:
         - echo "Custom message in a custom script"

The example above illustrates how you can provide the attributes used to run
the NO-OP job in the case of an empty pipeline. The only field for the NO-OP
job that might be generated for you is ``script``, but that will only happen
if you do not provide one yourself. Notice in this example the ``script``
uses the ``::`` notation to prescribe override behavior. Without this, the
``echo`` command would have been prepended to the automatically generated script
rather than replacing it.
if you do not provide one yourself.

------------------------------------
ci.yaml
------------------------------------
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Assignment of specs to runners
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Here's an example of a spack configuration file describing a build pipeline:
The ``mappings`` section corresponds to a list of runners, and during assignment
of specs to runners, the list is traversed in order looking for matches; the
first runner that matches a release spec is assigned to build that spec. The
``match`` section within each runner mapping section is a list of specs, and
if any of those specs match the release spec (the ``spec.satisfies()`` method
is used), then that runner is considered a match.

.. code-block:: yaml

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configuration of specs/jobs for a runner
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   ci:
     target: gitlab

     rebuild_index: True

     broken-specs-url: https://broken.specs.url

     broken-tests-packages:
     - gptune

     pipeline-gen:
     - submapping:
       - match:
         - os=ubuntu18.04
         build-job:
           tags:
           - spack-kube
           image: spack/ubuntu-bionic
       - match:
         - os=centos7
         build-job:
           tags:
           - spack-kube
           image: spack/centos7

   cdash:
     build-group: Release Testing
     url: https://cdash.spack.io
     project: Spack
     site: Spack AWS Gitlab Instance

The ``ci`` config section is used to configure how the pipeline workload should be
generated, mainly how the jobs for building specs should be assigned to the
configured runners on your instance. The main section for configuring pipelines
is ``pipeline-gen``, which is a list of job attribute sections that are merged,
using the same rules as Spack configs (:ref:`config-scope-precedence`), from the bottom up.
Sections are applied in an order consistent with how Spack orders scope precedence
when merging lists. There are two main section types: ``<type>-job`` sections and
``submapping`` sections.
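
As a minimal sketch of this layering (the values are placeholders), an ``any-job``
entry can set defaults that a later ``build-job`` entry refines:

.. code-block:: yaml

   ci:
     pipeline-gen:
     - any-job:
         tags: [spack-kube]            # applied to every generated job
     - build-job:
         image: spack/ubuntu-bionic    # applied only to rebuild jobs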

^^^^^^^^^^^^^^^^^^^^^^
Job Attribute Sections
^^^^^^^^^^^^^^^^^^^^^^

Each type of job may have attributes added or removed via sections in the ``pipeline-gen``
list. Job-type-specific attributes may be specified using the keys ``<type>-job`` to
add attributes to all jobs of type ``<type>``, or ``<type>-job-remove`` to remove attributes
of type ``<type>``. Each section may only contain one type of job attribute specification, i.e.,
``build-job`` and ``noop-job`` may not coexist, but ``build-job`` and ``build-job-remove`` may.

.. note::
   The ``*-remove`` specifications are applied before the additive attribute specifications.
   For example, in the case where both ``build-job`` and ``build-job-remove`` are listed in
   the same ``pipeline-gen`` section, the value will still exist in the merged build-job after
   applying the section.
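
A minimal sketch (the tag names are hypothetical) of one section combining removal with
addition:

.. code-block:: yaml

   ci:
     pipeline-gen:
     - build-job-remove:
         tags: [default-tag]     # removed first
       build-job:
         tags: [special-tag]     # then added, so it survives the merge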

All of the attributes specified are forwarded to the generated CI jobs; however, special
treatment is applied to the attributes ``tags``, ``image``, ``variables``, ``script``,
``before_script``, and ``after_script``, as they are components recognized explicitly by the
Spack CI generator. For the ``tags`` attribute, Spack will remove reserved tags
(:ref:`reserved_tags`) from all jobs specified in the config. In some cases, such as for
``signing`` jobs, reserved tags will be added back based on the type of CI that is being run.

Once a runner has been chosen to build a release spec, the ``build-job*``
sections provide information determining details of the job in the context of
the runner. At least one of the ``build-job*`` sections must contain a ``tags`` key, which
Once a runner has been chosen to build a release spec, the ``runner-attributes``
section provides information determining details of the job in the context of
the runner. The ``runner-attributes`` section must have a ``tags`` key, which
is a list containing at least one tag used to select the runner from among the
runners known to the gitlab instance. For Docker executor type runners, the
``image`` key is used to specify the Docker image used to build the release spec
@@ -550,7 +554,7 @@ information on to the runner that it needs to do its work (e.g. scheduler
parameters, etc.). Any ``variables`` provided here will be added, verbatim, to
each job.

The ``build-job`` section also allows users to supply custom ``script``,
The ``runner-attributes`` section also allows users to supply custom ``script``,
``before_script``, and ``after_script`` sections to be applied to every job
scheduled on that runner. This allows users to do any custom preparation or
cleanup tasks that fit their particular workflow, as well as completely
@@ -561,45 +565,46 @@ environment directory is located within your ``--artifacts_root`` (or if not
provided, within your ``$CI_PROJECT_DIR``), activates that environment for
you, and invokes ``spack ci rebuild``.

Sections that specify scripts (``script``, ``before_script``, ``after_script``) are all
read as lists of commands or lists of lists of commands. It is recommended to write scripts
as lists of lists if scripts will be composed via merging. The default behavior of merging
lists will remove duplicate commands and potentially apply unwanted reordering, whereas
merging lists of lists will preserve the local ordering and never remove duplicate
commands. When writing commands to the CI target script, all lists are expanded and
flattened into a single list.
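
A minimal sketch (the commands are placeholders) of a script written as a list of lists,
so local ordering survives merging:

.. code-block:: yaml

   ci:
     pipeline-gen:
     - build-job:
         before_script:
         - - echo "group one, step one"    # inner lists keep their order
           - echo "group one, step two"
         - - echo "group two, step one"    # merged as a unit with other sections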

.. _staging_algorithm:

^^^^^^^^^^^^^^^^^^^
Submapping Sections
^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Summary of ``.gitlab-ci.yml`` generation algorithm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A special case of attribute specification is the ``submapping`` section, which may be used
to apply job attributes to build jobs based on the package spec associated with the rebuild
job. Submapping is specified as a list of spec ``match`` lists associated with
``build-job``/``build-job-remove`` sections. There are two options for ``match_behavior``:
either ``first`` or ``merge`` may be specified. In either case, the ``submapping`` list is
processed from the bottom up, and then each ``match`` list is searched for a string that
satisfies the check ``spec.satisfies({match_item})`` for each concrete spec.
All specs yielded by the matrix (or all the specs in the environment) have their
dependencies computed, and the entire resulting set of specs are staged together
before being run through the ``gitlab-ci/mappings`` entries, where each staged
spec is assigned a runner. "Staging" is the name given to the process of
figuring out in what order the specs should be built, taking into consideration
Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize
the number of jobs in any stage of the pipeline, while ensuring that the jobs in
any stage only depend on jobs in previous stages (since those jobs are guaranteed
to have completed already). As a runner is determined for a job, the information
in the ``runner-attributes`` is used to populate various parts of the job
description that will be used by Gitlab CI. Once all the jobs have been assigned
a runner, the ``.gitlab-ci.yml`` is written to disk.

In the case of ``match_behavior: first``, the first ``match`` section in the list of
``submappings`` that contains a string that satisfies the spec will apply its
``build-job*`` attributes to the rebuild job associated with that spec. This is the
default behavior, and is used if no ``match_behavior`` is specified.
The short example provided above would result in the ``readline``, ``ncurses``,
and ``pkgconf`` packages getting staged and built on the runner chosen by the
``spack-k8s`` tag. In this example, spack assumes the runner is a Docker executor
type runner, and thus certain jobs will be run in the ``centos7`` container,
and others in the ``ubuntu-18.04`` container. The resulting ``.gitlab-ci.yml``
will contain 6 jobs in three stages. Once the jobs have been generated, the
presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
``spack ci generate`` command would result in all of the jobs being put in a
build group on CDash called "Release Testing" (that group will be created if
it didn't already exist).

In the case of ``match_behavior: merge``, all of the ``match`` sections in the list of
``submappings`` that contain a string that satisfies the spec will have the associated
``build-job*`` attributes applied to the rebuild job associated with that spec. Again,
the attributes will be merged starting from the bottom match going up to the top match.
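
A minimal sketch of a ``submapping`` with an explicit ``match_behavior`` (the second
match string and the variable are hypothetical, and the exact key placement is assumed
to follow the documented schema):

.. code-block:: yaml

   ci:
     pipeline-gen:
     - match_behavior: merge
       submapping:
       - match:
         - os=ubuntu18.04
         build-job:
           tags: [spack-kube]
       - match:
         - gptune                 # hypothetical spec string
         build-job:
           variables:
             EXTRA_FLAG: "1"      # hypothetical variable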

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Optional compiler bootstrapping
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In the case that no match is found in a submapping section, no additional attributes will be applied.

^^^^^^^^^^^^^
Bootstrapping
^^^^^^^^^^^^^


The ``bootstrap`` section allows you to specify lists of specs from
your ``definitions`` that should be staged ahead of the environment's ``specs``. At the moment
Spack pipelines also have support for bootstrapping compilers on systems that
may not already have the desired compilers installed. The idea here is that
you can specify a list of things to bootstrap in your ``definitions``, and
spack will guarantee those will be installed in a phase of the pipeline before
your release specs, so that you can rely on those packages being available in
the binary mirror when you need them later on in the pipeline. At the moment
the only viable use-case for bootstrapping is to install compilers.

Here's an example of what bootstrapping some compilers might look like:
@@ -632,18 +637,18 @@ Here's an example of what bootstrapping some compilers might look like:
         exclude:
         - '%gcc@7.3.0 os=centos7'
         - '%gcc@5.5.0 os=ubuntu18.04'
     ci:
     gitlab-ci:
       bootstrap:
       - name: compiler-pkgs
         compiler-agnostic: true
       pipeline-gen:
       # similar to the example higher up in this description
       mappings:
       # mappings similar to the example higher up in this description
       ...

The example above adds a list to the ``definitions`` called ``compiler-pkgs``
(you can add any number of these), which lists compiler packages that should
be staged ahead of the full matrix of release specs (in this example, only
readline). Then within the ``ci`` section, note the addition of a
readline). Then within the ``gitlab-ci`` section, note the addition of a
``bootstrap`` section, which can contain a list of items, each referring to
a list in the ``definitions`` section. These items can either
be a dictionary or a string. If you supply a dictionary, it must have a name
@@ -675,86 +680,6 @@ environment/stack file, and in that case no bootstrapping will be done (only the
specs will be staged for building) and the runners will be expected to already
have all needed compilers installed and configured for spack to use.

^^^^^^^^^^^^^^^^^^^
Pipeline Buildcache
^^^^^^^^^^^^^^^^^^^

The ``enable-artifacts-buildcache`` key
takes a boolean and determines whether the pipeline uses artifacts to store and
pass along the buildcaches from one stage to the next (the default if you don't
provide this option is ``False``).

^^^^^^^^^^^^^^^^
Broken Specs URL
^^^^^^^^^^^^^^^^

The optional ``broken-specs-url`` key tells Spack to check against a list of
specs that are known to be currently broken in ``develop``. If any such specs
are found, the ``spack ci generate`` command will fail with an error message
informing the user what broken specs were encountered. This allows the pipeline
to fail early and avoid wasting compute resources attempting to build packages
that will not succeed.

^^^^^
CDash
^^^^^

The optional ``cdash`` section provides information that will be used by the
``spack ci generate`` command (invoked by ``spack ci start``) for reporting
to CDash. All the jobs generated from this environment will belong to a
"build group" within CDash that can be tracked over time. As the release
progresses, this build group may have jobs added or removed. The url, project,
and site are used to specify the CDash instance to which build results should
be reported.

Take a look at the
`schema <https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/ci.py>`_
for the ci section of the spack environment file, to see precisely what
syntax is allowed there.

.. _reserved_tags:

^^^^^^^^^^^^^
Reserved Tags
^^^^^^^^^^^^^

Spack has a subset of tags (``public``, ``protected``, and ``notary``) that it reserves
for classifying runners that may require special permissions or access. The tags
``public`` and ``protected`` are used to distinguish between runners that use public
permissions and runners with protected permissions. The ``notary`` tag is a special tag
that is used to indicate runners that have access to the highly protected information
used for signing binaries using the ``signing`` job.
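
For instance, in a sketch like the following, the reserved ``notary`` tag would be
stripped from ordinary build jobs by the generator:

.. code-block:: yaml

   ci:
     pipeline-gen:
     - build-job:
         tags: [spack-kube, notary]   # 'notary' is reserved and will be removed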

.. _staging_algorithm:

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Summary of ``.gitlab-ci.yml`` generation algorithm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

All specs yielded by the matrix (or all the specs in the environment) have their
dependencies computed, and the entire resulting set of specs are staged together
before being run through the ``ci/pipeline-gen`` entries, where each staged
spec is assigned a runner. "Staging" is the name given to the process of
figuring out in what order the specs should be built, taking into consideration
Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize
the number of jobs in any stage of the pipeline, while ensuring that the jobs in
any stage only depend on jobs in previous stages (since those jobs are guaranteed
to have completed already). As a runner is determined for a job, the information
in the merged ``any-job*`` and ``build-job*`` sections is used to populate various parts of the job
description that will be used by the target CI pipelines. Once all the jobs have been assigned
a runner, the ``.gitlab-ci.yml`` is written to disk.

The short example provided above would result in the ``readline``, ``ncurses``,
and ``pkgconf`` packages getting staged and built on the runner chosen by the
``spack-k8s`` tag. In this example, spack assumes the runner is a Docker executor
type runner, and thus certain jobs will be run in the ``centos7`` container,
and others in the ``ubuntu-18.04`` container. The resulting ``.gitlab-ci.yml``
will contain 6 jobs in three stages. Once the jobs have been generated, the
presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
``spack ci generate`` command would result in all of the jobs being put in a
build group on CDash called "Release Testing" (that group will be created if
it didn't already exist).

-------------------------------------
Using a custom spack in your pipeline
-------------------------------------

@@ -801,21 +726,23 @@ generated by ``spack ci generate``. You also want your generated rebuild jobs

   spack:
     ...
     ci:
       pipeline-gen:
       - build-job:
           tags:
           - spack-kube
           image: spack/ubuntu-bionic
           before_script:
           - git clone ${SPACK_REPO}
           - pushd spack && git checkout ${SPACK_REF} && popd
           - . "./spack/share/spack/setup-env.sh"
           script:
           - spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
           - spack -d ci rebuild
           after_script:
           - rm -rf ./spack
     gitlab-ci:
       mappings:
       - match:
         - os=ubuntu18.04
         runner-attributes:
           tags:
           - spack-kube
           image: spack/ubuntu-bionic
           before_script:
           - git clone ${SPACK_REPO}
           - pushd spack && git checkout ${SPACK_REF} && popd
           - . "./spack/share/spack/setup-env.sh"
           script:
           - spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR}
           - spack -d ci rebuild
           after_script:
           - rm -rf ./spack

Now all of the generated rebuild jobs will use the same shell script to clone
spack before running their actual workload.
@@ -904,4 +831,3 @@ verify binary packages (when installing or creating buildcaches). You could
also have already trusted a key spack knows about, or if no key is present anywhere,
spack will install specs using ``--no-check-signature`` and create buildcaches
using ``-u`` (for unsigned binaries).

178 lib/spack/env/cc vendored
@@ -427,55 +427,6 @@ isystem_include_dirs_list=""
libs_list=""
other_args_list=""

# Global state for keeping track of -Wl,-rpath -Wl,/path
wl_expect_rpath=no

# Same, but for -Xlinker -rpath -Xlinker /path
xlinker_expect_rpath=no

parse_Wl() {
    # drop -Wl
    shift
    while [ $# -ne 0 ]; do
        if [ "$wl_expect_rpath" = yes ]; then
            if system_dir "$1"; then
                append system_rpath_dirs_list "$1"
            else
                append rpath_dirs_list "$1"
            fi
            wl_expect_rpath=no
        else
            case "$1" in
                -rpath=*)
                    arg="${1#-rpath=}"
                    if system_dir "$arg"; then
                        append system_rpath_dirs_list "$arg"
                    else
                        append rpath_dirs_list "$arg"
                    fi
                    ;;
                --rpath=*)
                    arg="${1#--rpath=}"
                    if system_dir "$arg"; then
                        append system_rpath_dirs_list "$arg"
                    else
                        append rpath_dirs_list "$arg"
                    fi
                    ;;
                -rpath|--rpath)
                    wl_expect_rpath=yes
                    ;;
                "$dtags_to_strip")
                    ;;
                *)
                    append other_args_list "-Wl,$1"
                    ;;
            esac
        fi
        shift
    done
}


while [ $# -ne 0 ]; do

@@ -575,77 +526,88 @@ while [ $# -ne 0 ]; do
            append other_args_list "-l$arg"
            ;;
        -Wl,*)
            IFS=,
            parse_Wl $1
            unset IFS
            arg="${1#-Wl,}"
            if [ -z "$arg" ]; then shift; arg="$1"; fi
            case "$arg" in
                -rpath=*) rp="${arg#-rpath=}" ;;
                --rpath=*) rp="${arg#--rpath=}" ;;
                -rpath,*) rp="${arg#-rpath,}" ;;
                --rpath,*) rp="${arg#--rpath,}" ;;
                -rpath|--rpath)
                    shift; arg="$1"
                    case "$arg" in
                        -Wl,*)
                            rp="${arg#-Wl,}"
                            ;;
                        *)
                            die "-Wl,-rpath was not followed by -Wl,*"
                            ;;
                    esac
                    ;;
                "$dtags_to_strip")
                    : # We want to remove this flag explicitly
                    ;;
                *)
                    append other_args_list "-Wl,$arg"
                    ;;
            esac
            ;;
        -Xlinker,*)
            arg="${1#-Xlinker,}"
            if [ -z "$arg" ]; then shift; arg="$1"; fi

            case "$arg" in
                -rpath=*) rp="${arg#-rpath=}" ;;
                --rpath=*) rp="${arg#--rpath=}" ;;
                -rpath|--rpath)
                    shift; arg="$1"
                    case "$arg" in
                        -Xlinker,*)
                            rp="${arg#-Xlinker,}"
                            ;;
                        *)
                            die "-Xlinker,-rpath was not followed by -Xlinker,*"
                            ;;
                    esac
                    ;;
                *)
                    append other_args_list "-Xlinker,$arg"
                    ;;
            esac
            ;;
        -Xlinker)
            shift
            if [ $# -eq 0 ]; then
                # -Xlinker without value: let the compiler error about it.
                append other_args_list -Xlinker
                xlinker_expect_rpath=no
                break
            elif [ "$xlinker_expect_rpath" = yes ]; then
                # Register the path of -Xlinker -rpath <other args> -Xlinker <path>
                if system_dir "$1"; then
                    append system_rpath_dirs_list "$1"
                else
                    append rpath_dirs_list "$1"
            if [ "$2" = "-rpath" ]; then
                if [ "$3" != "-Xlinker" ]; then
                    die "-Xlinker,-rpath was not followed by -Xlinker,*"
                fi
                xlinker_expect_rpath=no
                shift 3;
                rp="$1"
            elif [ "$2" = "$dtags_to_strip" ]; then
                shift # We want to remove this flag explicitly
            else
                case "$1" in
                    -rpath=*)
                        arg="${1#-rpath=}"
                        if system_dir "$arg"; then
                            append system_rpath_dirs_list "$arg"
                        else
                            append rpath_dirs_list "$arg"
                        fi
                        ;;
                    --rpath=*)
                        arg="${1#--rpath=}"
                        if system_dir "$arg"; then
                            append system_rpath_dirs_list "$arg"
                        else
                            append rpath_dirs_list "$arg"
                        fi
                        ;;
                    -rpath|--rpath)
                        xlinker_expect_rpath=yes
                        ;;
                    "$dtags_to_strip")
                        ;;
                    *)
                        append other_args_list -Xlinker
                        append other_args_list "$1"
                        ;;
                esac
                append other_args_list "$1"
            fi
            ;;
        "$dtags_to_strip")
            ;;
        *)
            append other_args_list "$1"
            if [ "$1" = "$dtags_to_strip" ]; then
                : # We want to remove this flag explicitly
            else
                append other_args_list "$1"
            fi
            ;;
    esac

    # test rpaths against system directories in one place.
    if [ -n "$rp" ]; then
        if system_dir "$rp"; then
            append system_rpath_dirs_list "$rp"
        else
            append rpath_dirs_list "$rp"
        fi
    fi
    shift
done

# We found `-Xlinker -rpath` but no matching value `-Xlinker /path`. Just append
# `-Xlinker -rpath` again and let the compiler or linker handle the error during arg
# parsing.
if [ "$xlinker_expect_rpath" = yes ]; then
    append other_args_list -Xlinker
    append other_args_list -rpath
fi

# Same, but for -Wl flags.
if [ "$wl_expect_rpath" = yes ]; then
    append other_args_list -Wl,-rpath
fi

#
# Add flags from Spack's cppflags, cflags, cxxflags, fcflags, fflags, and
# ldflags. We stick to the order that gmake puts the flags in by default.

2 lib/spack/external/__init__.py vendored
@@ -18,7 +18,7 @@

* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.2.0-dev (commit f3667f95030c6573842fb5f6df0d647285597509)
* Version: 0.2.0 (commit e44bad9c7b6defac73696f64078b2fe634719b62)

astunparse
----------------
60 lib/spack/external/archspec/cli.py vendored
@@ -6,61 +6,19 @@
archspec command line interface
"""

import argparse
import typing
import click

import archspec
import archspec.cpu


def _make_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        "archspec",
        description="archspec command line interface",
        add_help=False,
    )
    parser.add_argument(
        "--version",
        "-V",
        help="Show the version and exit.",
        action="version",
        version=f"archspec, version {archspec.__version__}",
    )
    parser.add_argument("--help", "-h", help="Show the help and exit.", action="help")

    subcommands = parser.add_subparsers(
        title="command",
        metavar="COMMAND",
        dest="command",
    )

    cpu_command = subcommands.add_parser(
        "cpu",
        help="archspec command line interface for CPU",
        description="archspec command line interface for CPU",
    )
    cpu_command.set_defaults(run=cpu)

    return parser
@click.group(name="archspec")
@click.version_option(version=archspec.__version__)
def main():
    """archspec command line interface"""


def cpu() -> int:
    """Run the `archspec cpu` subcommand."""
    print(archspec.cpu.host())
    return 0


def main(argv: typing.Optional[typing.List[str]] = None) -> int:
    """Run the `archspec` command line interface."""
    parser = _make_parser()

    try:
        args = parser.parse_args(argv)
    except SystemExit as err:
        return err.code

    if args.command is None:
        parser.print_help()
        return 0

    return args.run()
@main.command()
def cpu():
    """archspec command line interface for CPU"""
    click.echo(archspec.cpu.host())

@@ -2782,10 +2782,6 @@
            {
              "versions": "13.0:",
              "flags" : "-mcpu=apple-m1"
            },
            {
              "versions": "16.0:",
              "flags" : "-mcpu=apple-m2"
            }
          ],
          "apple-clang": [
@@ -2794,12 +2790,8 @@
              "flags" : "-march=armv8.5-a"
            },
            {
              "versions": "13.0:14.0.2",
              "flags" : "-mcpu=apple-m1"
            },
            {
              "versions": "14.0.2:",
              "flags" : "-mcpu=apple-m2"
              "versions": "13.0:",
              "flags" : "-mcpu=vortex"
            }
          ]
        }

@@ -5,20 +5,19 @@
import collections
import collections.abc
import errno
import fnmatch
import glob
import hashlib
import itertools
import numbers
import os
import posixpath
import re
import shutil
import stat
import sys
import tempfile
from contextlib import contextmanager
from typing import Callable, Iterable, List, Match, Optional, Tuple, Union
from sys import platform as _platform
from typing import Callable, List, Match, Optional, Tuple, Union

from llnl.util import tty
from llnl.util.lang import dedupe, memoized
@@ -27,7 +26,9 @@
from spack.util.executable import Executable, which
from spack.util.path import path_to_os_path, system_path_filter

if sys.platform != "win32":
is_windows = _platform == "win32"

if not is_windows:
    import grp
    import pwd
else:
@@ -153,7 +154,7 @@ def lookup(name):


def getuid():
    if sys.platform == "win32":
    if is_windows:
        import ctypes

        if ctypes.windll.shell32.IsUserAnAdmin() == 0:
@@ -166,7 +167,7 @@ def getuid():
@system_path_filter
def rename(src, dst):
    # On Windows, os.rename will fail if the destination file already exists
    if sys.platform == "win32":
    if is_windows:
        # Windows path existence checks will sometimes fail on junctions/links/symlinks
        # so check for that case
        if os.path.exists(dst) or os.path.islink(dst):
@@ -195,7 +196,7 @@ def _get_mime_type():
    """Generate method to call `file` system command to acquire mime type
    for a specified path
    """
    if sys.platform == "win32":
    if is_windows:
        # -h option (no-dereference) does not exist in Windows
        return file_command("-b", "--mime-type")
    else:
@@ -550,7 +551,7 @@ def get_owner_uid(path, err_msg=None):
    else:
        p_stat = os.stat(path)

    if sys.platform != "win32":
    if _platform != "win32":
        owner_uid = p_stat.st_uid
    else:
        sid = win32security.GetFileSecurity(
@@ -583,7 +584,7 @@ def group_ids(uid=None):
    Returns:
        (list of int): gids of groups the user is a member of
    """
    if sys.platform == "win32":
    if is_windows:
        tty.warn("Function is not supported on Windows")
        return []

@@ -603,7 +604,7 @@ def group_ids(uid=None):
@system_path_filter(arg_slice=slice(1))
def chgrp(path, group, follow_symlinks=True):
    """Implement the bash chgrp function on a single path"""
    if sys.platform == "win32":
    if is_windows:
        raise OSError("Function 'chgrp' is not supported on Windows")

    if isinstance(group, str):
@@ -1130,7 +1131,7 @@ def open_if_filename(str_or_file, mode="r"):
@system_path_filter
def touch(path):
    """Creates an empty file at the specified path."""
    if sys.platform == "win32":
    if is_windows:
        perms = os.O_WRONLY | os.O_CREAT
    else:
        perms = os.O_WRONLY | os.O_CREAT | os.O_NONBLOCK | os.O_NOCTTY
@@ -1192,7 +1193,7 @@ def temp_cwd():
        yield tmp_dir
    finally:
        kwargs = {}
        if sys.platform == "win32":
        if is_windows:
            kwargs["ignore_errors"] = False
            kwargs["onerror"] = readonly_file_handler(ignore_errors=True)
        shutil.rmtree(tmp_dir, **kwargs)
@@ -1437,7 +1438,7 @@ def visit_directory_tree(root, visitor, rel_path="", depth=0):
        try:
            isdir = f.is_dir()
        except OSError as e:
            if sys.platform == "win32" and hasattr(e, "winerror") and e.winerror == 5 and islink:
            if is_windows and hasattr(e, "winerror") and e.winerror == 5 and islink:
                # if path is a symlink, determine destination and
                # evaluate file vs directory
                link_target = resolve_link_target_relative_to_the_link(f)
@@ -1546,11 +1547,11 @@ def readonly_file_handler(ignore_errors=False):
    """

    def error_remove_readonly(func, path, exc):
        if sys.platform != "win32":
        if not is_windows:
            raise RuntimeError("This method should only be invoked on Windows")
        excvalue = exc[1]
        if (
            sys.platform == "win32"
            is_windows
            and func in (os.rmdir, os.remove, os.unlink)
            and excvalue.errno == errno.EACCES
        ):
@@ -1580,7 +1581,7 @@ def remove_linked_tree(path):

    # Windows readonly files cannot be removed by Python
    # directly.
    if sys.platform == "win32":
    if is_windows:
        kwargs["ignore_errors"] = False
        kwargs["onerror"] = readonly_file_handler(ignore_errors=True)

@@ -1673,38 +1674,6 @@ def fix_darwin_install_name(path):
        break


def find_first(root: str, files: Union[Iterable[str], str], bfs_depth: int = 2) -> Optional[str]:
    """Find the first file matching a pattern.

    The following

    .. code-block:: console

       $ find /usr -name 'abc*' -o -name 'def*' -quit

    is equivalent to:

    >>> find_first("/usr", ["abc*", "def*"])

    Any glob pattern supported by fnmatch can be used.

    The search order of this method is breadth-first over directories,
    until depth bfs_depth, after which depth-first search is used.

    Parameters:
        root (str): The root directory to start searching from
        files (str or Iterable): File pattern(s) to search for
        bfs_depth (int): (advanced) parameter that specifies at which
            depth to switch to depth-first search.

    Returns:
        str or None: The matching file or None when no file is found.
    """
    if isinstance(files, str):
        files = [files]
    return FindFirstFile(root, *files, bfs_depth=bfs_depth).find()


def find(root, files, recursive=True):
    """Search for ``files`` starting from the ``root`` directory.

@@ -2126,7 +2095,7 @@ def names(self):
        # on non Windows platform
        # Windows valid library extensions are:
        # ['.dll', '.lib']
        valid_exts = [".dll", ".lib"] if sys.platform == "win32" else [".dylib", ".so", ".a"]
        valid_exts = [".dll", ".lib"] if is_windows else [".dylib", ".so", ".a"]
        for ext in valid_exts:
            i = name.rfind(ext)
            if i != -1:
@@ -2274,7 +2243,7 @@ def find_libraries(libraries, root, shared=True, recursive=False, runtime=True):
        message = message.format(find_libraries.__name__, type(libraries))
        raise TypeError(message)

    if sys.platform == "win32":
    if is_windows:
        static_ext = "lib"
        # For linking (runtime=False) you need the .lib files regardless of
        # whether you are doing a shared or static link
@@ -2306,7 +2275,7 @@ def find_libraries(libraries, root, shared=True, recursive=False, runtime=True):
    # finally search all of root recursively. The search stops when the first
    # match is found.
    common_lib_dirs = ["lib", "lib64"]
    if sys.platform == "win32":
    if is_windows:
        common_lib_dirs.extend(["bin", "Lib"])

    for subdir in common_lib_dirs:
@@ -2441,7 +2410,7 @@ def _link(self, path, dest_dir):
        # For py2 compatibility, we have to catch the specific Windows error code
        # associated with trying to create a file that already exists (winerror 183)
        except OSError as e:
            if sys.platform == "win32" and (e.winerror == 183 or e.errno == errno.EEXIST):
            if e.winerror == 183:
                # We have either already symlinked or we are encountering a naming clash;
                # either way, we don't want to overwrite existing libraries
                already_linked = islink(dest_file)
@@ -2754,105 +2723,3 @@ def filesummary(path, print_bytes=16) -> Tuple[int, bytes]:
        return size, short_contents
    except OSError:
        return 0, b""


class FindFirstFile:
    """Uses hybrid iterative deepening to locate the first matching
    file. Up to depth ``bfs_depth`` it uses iterative deepening, which
    mimics breadth-first with the same memory footprint as depth-first
    search, after which it switches to ordinary depth-first search using
    ``os.walk``."""

    def __init__(self, root: str, *file_patterns: str, bfs_depth: int = 2):
        """Set up a search over ``root`` for the given file patterns.

        Args:
            root (str): directory in which to recursively search
            file_patterns (str): glob file patterns understood by fnmatch
            bfs_depth (int): until this depth breadth-first traversal is used,
                when no match is found, the mode is switched to depth-first search.
        """
        self.root = root
        self.bfs_depth = bfs_depth
        self.match: Callable

        # normcase is trivial on posix
        regex = re.compile("|".join(fnmatch.translate(os.path.normcase(p)) for p in file_patterns))

        # On case sensitive filesystems match against normcase'd paths.
        if os.path is posixpath:
            self.match = regex.match
        else:
            self.match = lambda p: regex.match(os.path.normcase(p))

    def find(self) -> Optional[str]:
        """Run the file search

        Returns:
            str or None: path of the matching file
        """
        self.file = None

        # First do iterative deepening (i.e. bfs through limited depth dfs)
        for i in range(self.bfs_depth + 1):
            if self._find_at_depth(self.root, i):
                return self.file

        # Then fall back to depth-first search
        return self._find_dfs()

    def _find_at_depth(self, path, max_depth, depth=0) -> bool:
        """Returns True when done. Notice search can be done
        either because a file was found, or because it recursed
        through all directories."""
        try:
            entries = os.scandir(path)
        except OSError:
            return True

        done = True

        with entries:
            # At max depth we look for matching files.
            if depth == max_depth:
                for f in entries:
                    # Exit on match
                    if self.match(f.name):
                        self.file = os.path.join(path, f.name)
                        return True

                    # is_dir should not require a stat call, so it's a good optimization.
                    if self._is_dir(f):
                        done = False
                return done

            # At lower depth only recurse into subdirs
            for f in entries:
                if not self._is_dir(f):
                    continue

                # If any subdir is not fully traversed, we're not done yet.
                if not self._find_at_depth(os.path.join(path, f.name), max_depth, depth + 1):
                    done = False

                # Early exit when we've found something.
                if self.file:
                    return True

        return done

    def _is_dir(self, f: os.DirEntry) -> bool:
        """Returns True when f is a dir we can enter (and not a symlink)."""
        try:
            return f.is_dir(follow_symlinks=False)
        except OSError:
            return False

    def _find_dfs(self) -> Optional[str]:
        """Returns match or None"""
        for dirpath, _, filenames in os.walk(self.root):
            for file in filenames:
                if self.match(file):
                    return os.path.join(dirpath, file)
        return None

@@ -5,13 +5,15 @@
import errno
import os
import shutil
import sys
import tempfile
from os.path import exists, join
from sys import platform as _platform

from llnl.util import lang

if sys.platform == "win32":
is_windows = _platform == "win32"

if is_windows:
    from win32file import CreateHardLink


@@ -21,7 +23,7 @@ def symlink(real_path, link_path):

    On Windows, use junctions if os.symlink fails.
    """
    if sys.platform != "win32":
    if not is_windows:
        os.symlink(real_path, link_path)
    elif _win32_can_symlink():
        # Windows requires target_is_directory=True when the target is a dir.
@@ -30,15 +32,9 @@ def symlink(real_path, link_path):
        try:
            # Try to use junctions
            _win32_junction(real_path, link_path)
        except OSError as e:
            if e.errno == errno.EEXIST:
                # An EEXIST error indicates that the file we're trying to "link"
                # is already present; don't bother trying to copy, which will also
                # fail. Just raise.
                raise
            else:
                # If all else fails, fall back to copying files
                shutil.copyfile(real_path, link_path)
        except OSError:
            # If all else fails, fall back to copying files
            shutil.copyfile(real_path, link_path)


def islink(path):
@@ -103,7 +99,7 @@ def _win32_is_junction(path):
    if os.path.islink(path):
        return False

    if sys.platform == "win32":
    if is_windows:
        import ctypes.wintypes

        GetFileAttributes = ctypes.windll.kernel32.GetFileAttributesW
@@ -25,7 +25,7 @@ def architecture_compatible(self, target, constraint):
        return (
            not target.architecture
            or not constraint.architecture
            or target.architecture.intersects(constraint.architecture)
            or target.architecture.satisfies(constraint.architecture)
        )

    @memoized
@@ -104,7 +104,7 @@ def compiler_compatible(self, parent, child, **kwargs):
        for cversion in child.compiler.versions:
            # For a few compilers use specialized comparisons.
            # Otherwise match on version match.
            if pversion.intersects(cversion):
            if pversion.satisfies(cversion):
                return True
            elif parent.compiler.name == "gcc" and self._gcc_compiler_compare(
                pversion, cversion
@@ -695,11 +695,8 @@ def _ensure_variant_defaults_are_parsable(pkgs, error_cls):
        try:
            variant.validate_or_raise(vspec, pkg_cls=pkg_cls)
        except spack.variant.InvalidVariantValueError:
            error_msg = (
                "The default value of the variant '{}' in package '{}' failed validation"
            )
            question = "Is it among the allowed values?"
            errors.append(error_cls(error_msg.format(variant_name, pkg_name), [question]))
            error_msg = "The variant '{}' default value in package '{}' cannot be validated"
            errors.append(error_cls(error_msg.format(variant_name, pkg_name), []))

    return errors

@@ -724,7 +721,7 @@ def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls
        dependency_pkg_cls = None
        try:
            dependency_pkg_cls = spack.repo.path.get_pkg_class(s.name)
            assert any(v.intersects(s.versions) for v in list(dependency_pkg_cls.versions))
            assert any(v.satisfies(s.versions) for v in list(dependency_pkg_cls.versions))
        except Exception:
            summary = (
                "{0}: dependency on {1} cannot be satisfied " "by known versions of {1.name}"
@@ -6,8 +6,6 @@
import codecs
import collections
import hashlib
import io
import itertools
import json
import multiprocessing.pool
import os
@@ -22,9 +20,7 @@
import urllib.parse
import urllib.request
import warnings
from contextlib import closing, contextmanager
from gzip import GzipFile
from typing import Union
from contextlib import closing
from urllib.error import HTTPError, URLError

import ruamel.yaml as yaml
@@ -43,7 +39,6 @@
import spack.platforms
import spack.relocate as relocate
import spack.repo
import spack.stage
import spack.store
import spack.traverse as traverse
import spack.util.crypto
@@ -503,9 +498,7 @@ def _binary_index():


#: Singleton binary_index instance
binary_index: Union[BinaryCacheIndex, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(
    _binary_index
)
binary_index = llnl.util.lang.Singleton(_binary_index)


class NoOverwriteException(spack.error.SpackError):
@@ -746,31 +739,34 @@ def get_buildfile_manifest(spec):
    return data


def prefixes_to_hashes(spec):
    return {
        str(s.prefix): s.dag_hash()
        for s in itertools.chain(
            spec.traverse(root=True, deptype="link"), spec.dependencies(deptype="run")
        )
    }


def get_buildinfo_dict(spec, rel=False):
    """Create metadata for a tarball"""
def write_buildinfo_file(spec, workdir, rel=False):
    """
    Create a cache file containing information
    required for the relocation
    """
    manifest = get_buildfile_manifest(spec)

    return {
        "sbang_install_path": spack.hooks.sbang.sbang_install_path(),
        "relative_rpaths": rel,
        "buildpath": spack.store.layout.root,
        "spackprefix": spack.paths.prefix,
        "relative_prefix": os.path.relpath(spec.prefix, spack.store.layout.root),
        "relocate_textfiles": manifest["text_to_relocate"],
        "relocate_binaries": manifest["binary_to_relocate"],
        "relocate_links": manifest["link_to_relocate"],
        "hardlinks_deduped": manifest["hardlinks_deduped"],
        "prefix_to_hash": prefixes_to_hashes(spec),
    }
    prefix_to_hash = dict()
    prefix_to_hash[str(spec.package.prefix)] = spec.dag_hash()
    deps = spack.build_environment.get_rpath_deps(spec.package)
    for d in deps + spec.dependencies(deptype="run"):
        prefix_to_hash[str(d.prefix)] = d.dag_hash()

    # Create buildinfo data and write it to disk
    buildinfo = {}
    buildinfo["sbang_install_path"] = spack.hooks.sbang.sbang_install_path()
    buildinfo["relative_rpaths"] = rel
    buildinfo["buildpath"] = spack.store.layout.root
    buildinfo["spackprefix"] = spack.paths.prefix
    buildinfo["relative_prefix"] = os.path.relpath(spec.prefix, spack.store.layout.root)
    buildinfo["relocate_textfiles"] = manifest["text_to_relocate"]
    buildinfo["relocate_binaries"] = manifest["binary_to_relocate"]
    buildinfo["relocate_links"] = manifest["link_to_relocate"]
    buildinfo["hardlinks_deduped"] = manifest["hardlinks_deduped"]
    buildinfo["prefix_to_hash"] = prefix_to_hash
    filename = buildinfo_file_name(workdir)
    with open(filename, "w") as outfile:
        outfile.write(syaml.dump(buildinfo, default_flow_style=True))


def tarball_directory_name(spec):
@@ -1143,68 +1139,6 @@ def generate_key_index(key_prefix, tmpdir=None):
    shutil.rmtree(tmpdir)


@contextmanager
def gzip_compressed_tarfile(path):
    """Create a reproducible, compressed tarfile"""
    # Create gzip compressed tarball of the install prefix
    # 1) Use explicit empty filename and mtime 0 for gzip header reproducibility.
    #    If the filename="" is dropped, Python will use fileobj.name instead.
    #    This should effectively mimic `gzip --no-name`.
    # 2) On AMD Ryzen 3700X and an SSD disk, we have the following on compression speed:
    #    compresslevel=6 gzip default: llvm takes 4mins, roughly 2.1GB
    #    compresslevel=9 python default: llvm takes 12mins, roughly 2.1GB
    #    So we follow gzip.
    with open(path, "wb") as fileobj, closing(
        GzipFile(filename="", mode="wb", compresslevel=6, mtime=0, fileobj=fileobj)
    ) as gzip_file, tarfile.TarFile(name="", mode="w", fileobj=gzip_file) as tar:
        yield tar


def deterministic_tarinfo(tarinfo: tarfile.TarInfo):
    # We only add files, symlinks, hardlinks, and directories.
    # No character devices, block devices or FIFOs should ever enter a tarball.
    if tarinfo.isdev():
        return None

    # For distribution, it makes no sense to keep user/group data, since (a) they don't exist
    # on other machines, and (b) they lead to surprises as `tar x` run as root will change
    # ownership if it can. We want to extract as the current user. By setting owner to root,
    # root will extract as root, and a non-privileged user will extract as themselves.
    tarinfo.uid = 0
    tarinfo.gid = 0
    tarinfo.uname = ""
    tarinfo.gname = ""

    # Reset mtime to epoch time; our prefixes are not truly immutable, so files may get
    # touched; as long as the content does not change, this ensures we get stable tarballs.
    tarinfo.mtime = 0

    # Normalize mode
    if tarinfo.isfile() or tarinfo.islnk():
        # If user can execute, use 0o755; else 0o644
        # This is to avoid potentially unsafe world-writable & executable files that may get
        # extracted when Python or tar is run with privileges
        tarinfo.mode = 0o644 if tarinfo.mode & 0o100 == 0 else 0o755
    else:  # symbolic link and directories
        tarinfo.mode = 0o755

    return tarinfo


def tar_add_metadata(tar: tarfile.TarFile, path: str, data: dict):
    # Serialize buildinfo for the tarball
    bstring = syaml.dump(data, default_flow_style=True).encode("utf-8")
    tarinfo = tarfile.TarInfo(name=path)
    tarinfo.size = len(bstring)
    tar.addfile(deterministic_tarinfo(tarinfo), io.BytesIO(bstring))


def _do_create_tarball(tarfile_path, binaries_dir, pkg_dir, buildinfo):
    with gzip_compressed_tarfile(tarfile_path) as tar:
        tar.add(name=binaries_dir, arcname=pkg_dir, filter=deterministic_tarinfo)
        tar_add_metadata(tar, buildinfo_file_name(pkg_dir), buildinfo)


def _build_tarball(
    spec,
    out_url,
@@ -1222,37 +1156,15 @@ def _build_tarball(
    if not spec.concrete:
        raise ValueError("spec must be concrete to build tarball")

    with tempfile.TemporaryDirectory(dir=spack.stage.get_stage_root()) as tmpdir:
        _build_tarball_in_stage_dir(
            spec,
            out_url,
            stage_dir=tmpdir,
            force=force,
            relative=relative,
            unsigned=unsigned,
            allow_root=allow_root,
            key=key,
            regenerate_index=regenerate_index,
        )
    # set up some paths
    tmpdir = tempfile.mkdtemp()
    cache_prefix = build_cache_prefix(tmpdir)


def _build_tarball_in_stage_dir(
    spec,
    out_url,
    stage_dir,
    force=False,
    relative=False,
    unsigned=False,
    allow_root=False,
    key=None,
    regenerate_index=False,
):
    cache_prefix = build_cache_prefix(stage_dir)
    tarfile_name = tarball_name(spec, ".spack")
    tarfile_dir = os.path.join(cache_prefix, tarball_directory_name(spec))
    tarfile_path = os.path.join(tarfile_dir, tarfile_name)
    spackfile_path = os.path.join(cache_prefix, tarball_path_name(spec, ".spack"))
    remote_spackfile_path = url_util.join(out_url, os.path.relpath(spackfile_path, stage_dir))
    remote_spackfile_path = url_util.join(out_url, os.path.relpath(spackfile_path, tmpdir))

    mkdirp(tarfile_dir)
    if web_util.url_exists(remote_spackfile_path):
@@ -1271,7 +1183,7 @@ def _build_tarball_in_stage_dir(
    signed_specfile_path = "{0}.sig".format(specfile_path)

    remote_specfile_path = url_util.join(
        out_url, os.path.relpath(specfile_path, os.path.realpath(stage_dir))
        out_url, os.path.relpath(specfile_path, os.path.realpath(tmpdir))
    )
    remote_signed_specfile_path = "{0}.sig".format(remote_specfile_path)

@@ -1287,7 +1199,7 @@ def _build_tarball_in_stage_dir(
        raise NoOverwriteException(url_util.format(remote_specfile_path))

    pkg_dir = os.path.basename(spec.prefix.rstrip(os.path.sep))
    workdir = os.path.join(stage_dir, pkg_dir)
    workdir = os.path.join(tmpdir, pkg_dir)

    # TODO: We generally don't want to mutate any files, but when using relative
    # mode, Spack unfortunately *does* mutate rpaths and links ahead of time.
@@ -1305,22 +1217,39 @@ def _build_tarball_in_stage_dir(
        os.remove(temp_tarfile_path)
    else:
        binaries_dir = spec.prefix
        mkdirp(os.path.join(workdir, ".spack"))

    # create info for later relocation and create tar
    buildinfo = get_buildinfo_dict(spec, relative)
    write_buildinfo_file(spec, workdir, relative)

    # optionally make the paths in the binaries relative to each other
    # in the spack install tree before creating tarball
    if relative:
        make_package_relative(workdir, spec, buildinfo, allow_root)
    elif not allow_root:
        ensure_package_relocatable(buildinfo, binaries_dir)
    try:
        if relative:
            make_package_relative(workdir, spec, allow_root)
        elif not allow_root:
            ensure_package_relocatable(workdir, binaries_dir)
    except Exception as e:
        shutil.rmtree(workdir)
        shutil.rmtree(tarfile_dir)
        shutil.rmtree(tmpdir)
        tty.die(e)

    _do_create_tarball(tarfile_path, binaries_dir, pkg_dir, buildinfo)
    # create gzip compressed tarball of the install prefix
    # On AMD Ryzen 3700X and an SSD disk, we have the following on compression speed:
    # compresslevel=6 gzip default: llvm takes 4mins, roughly 2.1GB
    # compresslevel=9 python default: llvm takes 12mins, roughly 2.1GB
    # So we follow gzip.
    with closing(tarfile.open(tarfile_path, "w:gz", compresslevel=6)) as tar:
        tar.add(name=binaries_dir, arcname=pkg_dir)
        if not relative:
            # Add buildinfo file
            buildinfo_path = buildinfo_file_name(workdir)
            buildinfo_arcname = buildinfo_file_name(pkg_dir)
            tar.add(name=buildinfo_path, arcname=buildinfo_arcname)

    # remove copy of install directory
    if relative:
        shutil.rmtree(workdir)
    shutil.rmtree(workdir)

    # get the sha256 checksum of the tarball
    checksum = checksum_tarball(tarfile_path)
@@ -1346,11 +1275,7 @@ def _build_tarball_in_stage_dir(
    spec_dict["buildinfo"] = buildinfo

    with open(specfile_path, "w") as outfile:
# Note: when using gpg clear sign, we need to avoid long lines (19995 chars).
|
||||
# If lines are longer, they are truncated without error. Thanks GPG!
|
||||
# So, here we still add newlines, but no indent, so save on file size and
|
||||
# line length.
|
||||
json.dump(spec_dict, outfile, indent=0, separators=(",", ":"))
|
||||
outfile.write(sjson.dump(spec_dict))
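
As the comment notes, `gpg --clearsign` silently truncates lines longer than 19995 characters, so the spec is serialized with indent=0: json still emits one element per line while the custom separators drop the padding. A small standalone illustration of that behavior (values are made up):

import json

data = {"spec": {"name": "zlib", "version": "1.2.13"}}
text = json.dumps(data, indent=0, separators=(",", ":"))
# indent=0 still inserts a newline per element, keeping every line far
# below GPG's 19995-character limit, while separators=(",", ":") drops
# the whitespace padding that would otherwise bloat the file.
assert "\n" in text
assert max(len(line) for line in text.splitlines()) < 19995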

    # sign the tarball and spec file with gpg
    if not unsigned:
@@ -1367,15 +1292,18 @@ def _build_tarball_in_stage_dir(

    tty.debug('Buildcache for "{0}" written to \n    {1}'.format(spec, remote_spackfile_path))

    # push the key to the build cache's _pgp directory so it can be
    # imported
    if not unsigned:
        push_keys(out_url, keys=[key], regenerate_index=regenerate_index, tmpdir=stage_dir)
    try:
        # push the key to the build cache's _pgp directory so it can be
        # imported
        if not unsigned:
            push_keys(out_url, keys=[key], regenerate_index=regenerate_index, tmpdir=tmpdir)

    # create an index.json for the build_cache directory so specs can be
    # found
    if regenerate_index:
        generate_package_index(url_util.join(out_url, os.path.relpath(cache_prefix, stage_dir)))
        # create an index.json for the build_cache directory so specs can be
        # found
        if regenerate_index:
            generate_package_index(url_util.join(out_url, os.path.relpath(cache_prefix, tmpdir)))
    finally:
        shutil.rmtree(tmpdir)

    return None

@@ -1608,12 +1536,13 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
    return None


def make_package_relative(workdir, spec, buildinfo, allow_root):
def make_package_relative(workdir, spec, allow_root):
    """
    Change paths in binaries to relative paths. Change absolute symlinks
    to relative symlinks.
    """
    prefix = spec.prefix
    buildinfo = read_buildinfo_file(workdir)
    old_layout_root = buildinfo["buildpath"]
    orig_path_names = list()
    cur_path_names = list()
@@ -1637,8 +1566,9 @@ def make_package_relative(workdir, spec, buildinfo, allow_root):
    relocate.make_link_relative(cur_path_names, orig_path_names)
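
The symlink half of the job above boils down to rewriting absolute link targets relative to the directory holding the link. A hedged sketch of that one step, with an invented helper name (relocate.make_link_relative is the real implementation):

import os


def _make_symlink_relative(link_path):
    # Illustrative helper (invented name): rewrite an absolute symlink
    # target as a path relative to the directory containing the link.
    target = os.readlink(link_path)
    if os.path.isabs(target):
        relative = os.path.relpath(target, os.path.dirname(link_path))
        os.unlink(link_path)
        os.symlink(relative, link_path)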


def ensure_package_relocatable(buildinfo, binaries_dir):
def ensure_package_relocatable(workdir, binaries_dir):
    """Check if package binaries are relocatable."""
    buildinfo = read_buildinfo_file(workdir)
    binaries = [os.path.join(binaries_dir, f) for f in buildinfo["relocate_binaries"]]
    relocate.ensure_binaries_are_relocatable(binaries)

@@ -1799,15 +1729,7 @@ def is_backup_file(file):
    relocate.relocate_text(text_names, prefix_to_prefix_text)

    # relocate the install prefixes in binary files including dependencies
    changed_files = relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)

    # Add ad-hoc signatures to patched macho files when on macOS.
    if "macho" in platform.binary_formats and sys.platform == "darwin":
        codesign = which("codesign")
        if not codesign:
            return
        for binary in changed_files:
            codesign("-fs-", binary)
    relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
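
A Mach-O binary whose load commands were patched no longer carries a valid code signature, which is why an ad-hoc signature (`-f -s -`, spelled `-fs-` above) is re-applied. An equivalent standalone sketch using subprocess, for illustration only:

import subprocess
import sys


def _adhoc_sign(binaries):
    # Illustrative only: `codesign -f -s -` forces an ad-hoc signature
    # over any existing one; meaningful solely on macOS, hence the guard.
    if sys.platform != "darwin":
        return
    for path in binaries:
        subprocess.run(["codesign", "-f", "-s", "-", path], check=True)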

    # If we are installing back to the same location
    # relocate the sbang location if the spack directory changed
@@ -2026,7 +1948,7 @@ def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None
    with spack.util.path.filter_padding():
        tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
        extract_tarball(spec, download_result, allow_root, unsigned, force)
        spack.hooks.post_install(spec, False)
        spack.hooks.post_install(spec)
        spack.store.db.add(spec, spack.store.layout)

@@ -9,7 +9,6 @@
import sys
import sysconfig
import warnings
from typing import Dict, Optional, Sequence, Union

import archspec.cpu

@@ -22,10 +21,8 @@

from .config import spec_for_current_python

QueryInfo = Dict[str, "spack.spec.Spec"]


def _python_import(module: str) -> bool:
def _python_import(module):
    try:
        __import__(module)
    except ImportError:
@@ -33,9 +30,7 @@ def _python_import(module: str) -> bool:
    return True


def _try_import_from_store(
    module: str, query_spec: Union[str, "spack.spec.Spec"], query_info: Optional[QueryInfo] = None
) -> bool:
def _try_import_from_store(module, query_spec, query_info=None):
    """Return True if the module can be imported from an already
    installed spec, False otherwise.

@@ -57,7 +52,7 @@ def _try_import_from_store(
    module_paths = [
        os.path.join(candidate_spec.prefix, pkg.purelib),
        os.path.join(candidate_spec.prefix, pkg.platlib),
    ]
    ]  # type: list[str]
    path_before = list(sys.path)

    # NOTE: try module_paths first and last, last allows an existing version in path
@@ -94,7 +89,7 @@ def _try_import_from_store(
    return False
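
The NOTE above describes the search order: the candidate store paths are tried both before and after the pre-existing sys.path, so a suitable module already importable on the host can still win. A simplified, self-contained sketch of that pattern (not the actual implementation):

import importlib
import sys


def _import_with_paths(module, module_paths):
    # Simplified sketch: candidate paths go first *and* last, so an
    # existing compatible copy on the original sys.path is tried in
    # between; sys.path is always restored afterwards.
    original = list(sys.path)
    sys.path = list(module_paths) + original + list(module_paths)
    try:
        importlib.import_module(module)
        return True
    except ImportError:
        return False
    finally:
        sys.path = original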


def _fix_ext_suffix(candidate_spec: "spack.spec.Spec"):
def _fix_ext_suffix(candidate_spec):
    """Fix the external suffixes of Python extensions on the fly for
    platforms that may need it

@@ -162,11 +157,7 @@ def _fix_ext_suffix(candidate_spec: "spack.spec.Spec"):
    os.symlink(abs_path, link_name)


def _executables_in_store(
    executables: Sequence[str],
    query_spec: Union["spack.spec.Spec", str],
    query_info: Optional[QueryInfo] = None,
) -> bool:
def _executables_in_store(executables, query_spec, query_info=None):
    """Return True if at least one of the executables can be retrieved from
    a spec in store, False otherwise.

@@ -202,7 +193,7 @@ def _executables_in_store(
    return False


def _root_spec(spec_str: str) -> str:
def _root_spec(spec_str):
    """Add a proper compiler and target to a spec used during bootstrapping.

    Args:

@@ -7,7 +7,6 @@
import contextlib
import os.path
import sys
from typing import Any, Dict, Generator, MutableSequence, Sequence

from llnl.util import tty

@@ -25,12 +24,12 @@
_REF_COUNT = 0


def is_bootstrapping() -> bool:
def is_bootstrapping():
    """Return True if we are in a bootstrapping context, False otherwise."""
    return _REF_COUNT > 0


def spec_for_current_python() -> str:
def spec_for_current_python():
    """For bootstrapping purposes we are just interested in the Python
    minor version (all patch versions are ABI compatible within the same minor).

@@ -42,14 +41,14 @@ def spec_for_current_python() -> str:
    return f"python@{version_str}"


def root_path() -> str:
def root_path():
    """Root of all the bootstrap related folders"""
    return spack.util.path.canonicalize_path(
        spack.config.get("bootstrap:root", spack.paths.default_user_bootstrap_path)
    )


def store_path() -> str:
def store_path():
    """Path to the store used for bootstrapped software"""
    enabled = spack.config.get("bootstrap:enable", True)
    if not enabled:
@@ -60,7 +59,7 @@ def store_path() -> str:


@contextlib.contextmanager
def spack_python_interpreter() -> Generator:
def spack_python_interpreter():
    """Override the current configuration to set the interpreter under
    which Spack is currently running as the only Python external spec
    available.
@@ -77,18 +76,18 @@ def spack_python_interpreter() -> Generator:
    yield


def _store_path() -> str:
def _store_path():
    bootstrap_root_path = root_path()
    return spack.util.path.canonicalize_path(os.path.join(bootstrap_root_path, "store"))


def _config_path() -> str:
def _config_path():
    bootstrap_root_path = root_path()
    return spack.util.path.canonicalize_path(os.path.join(bootstrap_root_path, "config"))


@contextlib.contextmanager
def ensure_bootstrap_configuration() -> Generator:
def ensure_bootstrap_configuration():
    """Swap the current configuration for the one used to bootstrap Spack.

    The context manager is reference counted to ensure we don't swap multiple
@@ -108,7 +107,7 @@ def ensure_bootstrap_configuration() -> Generator:
    _REF_COUNT -= 1
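
A stripped-down illustration of the reference counting described in the docstring: only the outermost use of the context manager performs the swap, nested uses just bump the counter. Purely a sketch, with the expensive work stubbed out:

import contextlib

_REF_COUNT = 0


@contextlib.contextmanager
def _refcounted_swap():
    # Sketch only: the outermost `with` performs the swap, nested uses
    # are no-ops; the finally block guarantees the count always drops.
    global _REF_COUNT
    outermost = _REF_COUNT == 0
    _REF_COUNT += 1
    try:
        if outermost:
            pass  # swap in the bootstrap configuration here
        yield
    finally:
        _REF_COUNT -= 1
        if outermost:
            pass  # restore the user configuration here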


def _read_and_sanitize_configuration() -> Dict[str, Any]:
def _read_and_sanitize_configuration():
    """Read the user configuration that needs to be reused for bootstrapping
    and remove the entries that should not be copied over.
    """
@@ -121,11 +120,9 @@ def _read_and_sanitize_configuration() -> Dict[str, Any]:
    return user_configuration


def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
def _bootstrap_config_scopes():
    tty.debug("[BOOTSTRAP CONFIG SCOPE] name=_builtin")
    config_scopes: MutableSequence["spack.config.ConfigScope"] = [
        spack.config.InternalConfigScope("_builtin", spack.config.config_defaults)
    ]
    config_scopes = [spack.config.InternalConfigScope("_builtin", spack.config.config_defaults)]
    configuration_paths = (spack.config.configuration_defaults_path, ("bootstrap", _config_path()))
    for name, path in configuration_paths:
        platform = spack.platforms.host().name
@@ -140,7 +137,7 @@ def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
    return config_scopes


def _add_compilers_if_missing() -> None:
def _add_compilers_if_missing():
    arch = spack.spec.ArchSpec.frontend_arch()
    if not spack.compilers.compilers_for_arch(arch):
        new_compilers = spack.compilers.find_new_compilers()
@@ -149,7 +146,7 @@ def _add_compilers_if_missing() -> None:


@contextlib.contextmanager
def _ensure_bootstrap_configuration() -> Generator:
def _ensure_bootstrap_configuration():
    bootstrap_store_path = store_path()
    user_configuration = _read_and_sanitize_configuration()
    with spack.environment.no_active_environment():

@@ -29,7 +29,7 @@
import os.path
import sys
import uuid
from typing import Any, Callable, Dict, List, Optional, Tuple
from typing import Callable, List, Optional

from llnl.util import tty
from llnl.util.lang import GroupedExceptionHandler
@@ -66,9 +66,6 @@
_bootstrap_methods = {}


ConfigDictionary = Dict[str, Any]


def bootstrapper(bootstrapper_type: str):
    """Decorator to register classes implementing bootstrapping
    methods.
@@ -89,7 +86,7 @@ class Bootstrapper:

    config_scope_name = ""

    def __init__(self, conf: ConfigDictionary) -> None:
    def __init__(self, conf):
        self.conf = conf
        self.name = conf["name"]
        self.metadata_dir = spack.util.path.canonicalize_path(conf["metadata"])
@@ -103,7 +100,7 @@ def __init__(self, conf: ConfigDictionary) -> None:
        self.url = url

    @property
    def mirror_scope(self) -> spack.config.InternalConfigScope:
    def mirror_scope(self):
        """Mirror scope to be pushed onto the bootstrapping configuration when using
        this bootstrapper.
        """
@@ -124,7 +121,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
        """
        return False

    def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
    def try_search_path(self, executables: List[str], abstract_spec_str: str) -> bool:
        """Try to search some executables in the prefix of specs satisfying the abstract
        spec passed as argument.

@@ -142,15 +139,13 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
class BuildcacheBootstrapper(Bootstrapper):
    """Install the software needed during bootstrapping from a buildcache."""

    def __init__(self, conf) -> None:
    def __init__(self, conf):
        super().__init__(conf)
        self.last_search: Optional[ConfigDictionary] = None
        self.last_search = None
        self.config_scope_name = f"bootstrap_buildcache-{uuid.uuid4()}"

    @staticmethod
    def _spec_and_platform(
        abstract_spec_str: str,
    ) -> Tuple[spack.spec.Spec, spack.platforms.Platform]:
    def _spec_and_platform(abstract_spec_str):
        """Return the spec object and platform we need to use when
        querying the buildcache.

@@ -163,7 +158,7 @@ def _spec_and_platform(
        bincache_platform = spack.platforms.real_host()
        return abstract_spec, bincache_platform

    def _read_metadata(self, package_name: str) -> Any:
    def _read_metadata(self, package_name):
        """Return metadata about the given package."""
        json_filename = f"{package_name}.json"
        json_dir = self.metadata_dir
@@ -172,13 +167,7 @@ def _read_metadata(self, package_name: str) -> Any:
            data = json.load(stream)
        return data

    def _install_by_hash(
        self,
        pkg_hash: str,
        pkg_sha256: str,
        index: List[spack.spec.Spec],
        bincache_platform: spack.platforms.Platform,
    ) -> None:
    def _install_by_hash(self, pkg_hash, pkg_sha256, index, bincache_platform):
        index_spec = next(x for x in index if x.dag_hash() == pkg_hash)
        # Reconstruct the compiler that we need to use for bootstrapping
        compiler_entry = {
@@ -203,13 +192,7 @@ def _install_by_hash(
            match, allow_root=True, unsigned=True, force=True, sha256=pkg_sha256
        )

    def _install_and_test(
        self,
        abstract_spec: spack.spec.Spec,
        bincache_platform: spack.platforms.Platform,
        bincache_data,
        test_fn,
    ) -> bool:
    def _install_and_test(self, abstract_spec, bincache_platform, bincache_data, test_fn):
        # Ensure we see only the buildcache being used to bootstrap
        with spack.config.override(self.mirror_scope):
            # This index is currently needed to get the compiler used to build some
@@ -225,7 +208,7 @@ def _install_and_test(
                # This will be None for things that don't depend on python
                python_spec = item.get("python", None)
                # Skip specs which are not compatible
                if not abstract_spec.intersects(candidate_spec):
                if not abstract_spec.satisfies(candidate_spec):
                    continue

                if python_spec is not None and python_spec not in abstract_spec:
@@ -234,14 +217,13 @@ def _install_and_test(
                for _, pkg_hash, pkg_sha256 in item["binaries"]:
                    self._install_by_hash(pkg_hash, pkg_sha256, index, bincache_platform)

                info: ConfigDictionary = {}
                info = {}
                if test_fn(query_spec=abstract_spec, query_info=info):
                    self.last_search = info
                    return True
        return False

    def try_import(self, module: str, abstract_spec_str: str) -> bool:
        info: ConfigDictionary
    def try_import(self, module, abstract_spec_str):
        test_fn, info = functools.partial(_try_import_from_store, module), {}
        if test_fn(query_spec=abstract_spec_str, query_info=info):
            return True
@@ -253,8 +235,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
        data = self._read_metadata(module)
        return self._install_and_test(abstract_spec, bincache_platform, data, test_fn)

    def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
        info: ConfigDictionary
    def try_search_path(self, executables, abstract_spec_str):
        test_fn, info = functools.partial(_executables_in_store, executables), {}
        if test_fn(query_spec=abstract_spec_str, query_info=info):
            self.last_search = info
@@ -270,13 +251,13 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
class SourceBootstrapper(Bootstrapper):
    """Install the software needed during bootstrapping from sources."""

    def __init__(self, conf) -> None:
    def __init__(self, conf):
        super().__init__(conf)
        self.last_search: Optional[ConfigDictionary] = None
        self.last_search = None
        self.config_scope_name = f"bootstrap_source-{uuid.uuid4()}"

    def try_import(self, module: str, abstract_spec_str: str) -> bool:
        info: ConfigDictionary = {}
    def try_import(self, module, abstract_spec_str):
        info = {}
        if _try_import_from_store(module, abstract_spec_str, query_info=info):
            self.last_search = info
            return True
@@ -312,8 +293,8 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
            return True
        return False

    def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
        info: ConfigDictionary = {}
    def try_search_path(self, executables, abstract_spec_str):
        info = {}
        if _executables_in_store(executables, abstract_spec_str, query_info=info):
            self.last_search = info
            return True
@@ -342,13 +323,13 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
    return False


def create_bootstrapper(conf: ConfigDictionary):
def create_bootstrapper(conf):
    """Return a bootstrap object built according to the configuration argument"""
    btype = conf["type"]
    return _bootstrap_methods[btype](conf)


def source_is_enabled_or_raise(conf: ConfigDictionary):
def source_is_enabled_or_raise(conf):
    """Raise ValueError if the source is not enabled for bootstrapping"""
    trusted, name = spack.config.get("bootstrap:trusted"), conf["name"]
    if not trusted.get(name, False):
@@ -473,7 +454,7 @@ def ensure_executables_in_path_or_raise(
    raise RuntimeError(msg)


def _add_externals_if_missing() -> None:
def _add_externals_if_missing():
    search_list = [
        # clingo
        spack.repo.path.get_pkg_class("cmake"),
@@ -487,41 +468,41 @@ def _add_externals_if_missing() -> None:
    spack.detection.update_configuration(detected_packages, scope="bootstrap")


def clingo_root_spec() -> str:
def clingo_root_spec():
    """Return the root spec used to bootstrap clingo"""
    return _root_spec("clingo-bootstrap@spack+python")


def ensure_clingo_importable_or_raise() -> None:
def ensure_clingo_importable_or_raise():
    """Ensure that the clingo module is available for import."""
    ensure_module_importable_or_raise(module="clingo", abstract_spec=clingo_root_spec())


def gnupg_root_spec() -> str:
def gnupg_root_spec():
    """Return the root spec used to bootstrap GnuPG"""
    return _root_spec("gnupg@2.3:")


def ensure_gpg_in_path_or_raise() -> None:
def ensure_gpg_in_path_or_raise():
    """Ensure gpg or gpg2 is in the PATH or raise."""
    return ensure_executables_in_path_or_raise(
        executables=["gpg2", "gpg"], abstract_spec=gnupg_root_spec()
    )


def patchelf_root_spec() -> str:
def patchelf_root_spec():
    """Return the root spec used to bootstrap patchelf"""
    # 0.13.1 is the last version not to require C++17.
    return _root_spec("patchelf@0.13.1:")


def verify_patchelf(patchelf: "spack.util.executable.Executable") -> bool:
def verify_patchelf(patchelf):
    """Older patchelf versions can produce broken binaries, so we
    verify the version here.

    Arguments:

        patchelf: patchelf executable
        patchelf (spack.util.executable.Executable): patchelf executable
    """
    out = patchelf("--version", output=str, error=os.devnull, fail_on_error=False).strip()
    if patchelf.returncode != 0:
@@ -536,7 +517,7 @@ def verify_patchelf(patchelf: "spack.util.executable.Executable") -> bool:
    return version >= spack.version.Version("0.13.1")
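
The version gate above parses the output of `patchelf --version` and rejects anything older than 0.13.1. A self-contained approximation of that check (the regex and tuple comparison are illustrative, not the exact upstream code):

import re


def _patchelf_version_ok(version_output):
    # Approximation of the check: `patchelf --version` prints something
    # like "patchelf 0.17.2"; reject anything older than 0.13.1.
    match = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", version_output)
    if not match:
        return False
    return tuple(int(p) for p in match.groups(default="0")) >= (0, 13, 1)


assert _patchelf_version_ok("patchelf 0.17.2")
assert not _patchelf_version_ok("patchelf 0.12")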


def ensure_patchelf_in_path_or_raise() -> None:
def ensure_patchelf_in_path_or_raise():
    """Ensure patchelf is in the PATH or raise."""
    # The old concretizer is not smart and we're doing its job: if the latest patchelf
    # does not concretize because the compiler doesn't support C++17, we try to
@@ -553,7 +534,7 @@ def ensure_patchelf_in_path_or_raise() -> None:
    )


def ensure_core_dependencies() -> None:
def ensure_core_dependencies():
    """Ensure the presence of all the core dependencies."""
    if sys.platform.lower() == "linux":
        ensure_patchelf_in_path_or_raise()
@@ -562,7 +543,7 @@ def ensure_core_dependencies() -> None:
    ensure_clingo_importable_or_raise()


def all_core_root_specs() -> List[str]:
def all_core_root_specs():
    """Return a list of all the core root specs that may be used to bootstrap Spack"""
    return [clingo_root_spec(), gnupg_root_spec(), patchelf_root_spec()]

@@ -9,7 +9,6 @@
import pathlib
import sys
import warnings
from typing import List

import archspec.cpu

@@ -28,7 +27,7 @@ class BootstrapEnvironment(spack.environment.Environment):
    """Environment to install dependencies of Spack for a given interpreter and architecture"""

    @classmethod
    def spack_dev_requirements(cls) -> List[str]:
    def spack_dev_requirements(cls):
        """Spack development requirements"""
        return [
            isort_root_spec(),
@@ -39,7 +38,7 @@ def spack_dev_requirements(cls) -> List[str]:
        ]

    @classmethod
    def environment_root(cls) -> pathlib.Path:
    def environment_root(cls):
        """Environment root directory"""
        bootstrap_root_path = root_path()
        python_part = spec_for_current_python().replace("@", "")
@@ -53,12 +52,12 @@ def environment_root(cls) -> pathlib.Path:
        )

    @classmethod
    def view_root(cls) -> pathlib.Path:
    def view_root(cls):
        """Location of the view"""
        return cls.environment_root().joinpath("view")

    @classmethod
    def pythonpaths(cls) -> List[str]:
    def pythonpaths(cls):
        """Paths to be added to sys.path or PYTHONPATH"""
        python_dir_part = f"python{'.'.join(str(x) for x in sys.version_info[:2])}"
        glob_expr = str(cls.view_root().joinpath("**", python_dir_part, "**"))
@@ -69,21 +68,21 @@ def pythonpaths(cls) -> List[str]:
        return result

    @classmethod
    def bin_dirs(cls) -> List[pathlib.Path]:
    def bin_dirs(cls):
        """Paths to be added to PATH"""
        return [cls.view_root().joinpath("bin")]

    @classmethod
    def spack_yaml(cls) -> pathlib.Path:
    def spack_yaml(cls):
        """Environment spack.yaml file"""
        return cls.environment_root().joinpath("spack.yaml")

    def __init__(self) -> None:
    def __init__(self):
        if not self.spack_yaml().exists():
            self._write_spack_yaml_file()
        super().__init__(self.environment_root())

    def update_installations(self) -> None:
    def update_installations(self):
        """Update the installations of this environment.

        The update is done using a depfile on Linux and macOS, and using the ``install_all``
@@ -104,7 +103,7 @@ def update_installations(self) -> None:
            self._install_with_depfile()
        self.write(regenerate=True)

    def update_syspath_and_environ(self) -> None:
    def update_syspath_and_environ(self):
        """Update ``sys.path`` and the PATH, PYTHONPATH environment variables to point to
        the environment view.
        """
@@ -120,7 +119,7 @@ def update_syspath_and_environ(self) -> None:
            + [str(x) for x in self.pythonpaths()]
        )

    def _install_with_depfile(self) -> None:
    def _install_with_depfile(self):
        spackcmd = spack.util.executable.which("spack")
        spackcmd(
            "-e",
@@ -142,7 +141,7 @@ def _install_with_depfile(self) -> None:
            **kwargs,
        )

    def _write_spack_yaml_file(self) -> None:
    def _write_spack_yaml_file(self):
        tty.msg(
            "[BOOTSTRAPPING] Spack has missing dependencies, creating a bootstrapping environment"
        )
@@ -160,32 +159,32 @@ def _write_spack_yaml_file(self) -> None:
        self.spack_yaml().write_text(template.render(context), encoding="utf-8")


def isort_root_spec() -> str:
def isort_root_spec():
    """Return the root spec used to bootstrap isort"""
    return _root_spec("py-isort@4.3.5:")


def mypy_root_spec() -> str:
def mypy_root_spec():
    """Return the root spec used to bootstrap mypy"""
    return _root_spec("py-mypy@0.900:")


def black_root_spec() -> str:
def black_root_spec():
    """Return the root spec used to bootstrap black"""
    return _root_spec("py-black@:23.1.0")


def flake8_root_spec() -> str:
def flake8_root_spec():
    """Return the root spec used to bootstrap flake8"""
    return _root_spec("py-flake8")


def pytest_root_spec() -> str:
def pytest_root_spec():
    """Return the root spec used to bootstrap pytest"""
    return _root_spec("py-pytest")


def ensure_environment_dependencies() -> None:
def ensure_environment_dependencies():
    """Ensure Spack dependencies from the bootstrap environment are installed and ready to use"""
    with BootstrapEnvironment() as env:
        env.update_installations()

@@ -4,7 +4,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Query the status of bootstrapping on this machine"""
import platform
from typing import List, Optional, Sequence, Tuple, Union

import spack.util.executable

@@ -20,12 +19,8 @@
    pytest_root_spec,
)

ExecutablesType = Union[str, Sequence[str]]
RequiredResponseType = Tuple[bool, Optional[str]]
SpecLike = Union["spack.spec.Spec", str]


def _required_system_executable(exes: ExecutablesType, msg: str) -> RequiredResponseType:
def _required_system_executable(exes, msg):
    """Search for an executable in the system path only."""
    if isinstance(exes, str):
        exes = (exes,)
@@ -34,9 +29,7 @@ def _required_system_executable(exes: ExecutablesType, msg: str) -> RequiredResp
    return False, msg


def _required_executable(
    exes: ExecutablesType, query_spec: SpecLike, msg: str
) -> RequiredResponseType:
def _required_executable(exes, query_spec, msg):
    """Search for an executable in the system path or in the bootstrap store."""
    if isinstance(exes, str):
        exes = (exes,)
@@ -45,7 +38,7 @@ def _required_executable(
    return False, msg


def _required_python_module(module: str, query_spec: SpecLike, msg: str) -> RequiredResponseType:
def _required_python_module(module, query_spec, msg):
    """Check if a Python module is available in the current interpreter or
    if it can be loaded from the bootstrap store
    """
@@ -54,7 +47,7 @@ def _required_python_module(module: str, query_spec: SpecLike, msg: str) -> Requ
    return False, msg


def _missing(name: str, purpose: str, system_only: bool = True) -> str:
def _missing(name, purpose, system_only=True):
    """Message to be printed if an executable is not found"""
    msg = '[{2}] MISSING "{0}": {1}'
    if not system_only:
@@ -62,7 +55,7 @@ def _missing(name: str, purpose: str, system_only: bool = True) -> str:
    return msg.format(name, purpose, "@*y{{-}}")


def _core_requirements() -> List[RequiredResponseType]:
def _core_requirements():
    _core_system_exes = {
        "make": _missing("make", "required to build software from sources"),
        "patch": _missing("patch", "required to patch source code before building"),
@@ -87,7 +80,7 @@ def _core_requirements() -> List[RequiredResponseType]:
    return result


def _buildcache_requirements() -> List[RequiredResponseType]:
def _buildcache_requirements():
    _buildcache_exes = {
        "file": _missing("file", "required to analyze files for buildcaches"),
        ("gpg2", "gpg"): _missing("gpg2", "required to sign/verify buildcaches", False),
@@ -110,7 +103,7 @@ def _buildcache_requirements() -> List[RequiredResponseType]:
    return result


def _optional_requirements() -> List[RequiredResponseType]:
def _optional_requirements():
    _optional_exes = {
        "zstd": _missing("zstd", "required to compress/decompress code archives"),
        "svn": _missing("svn", "required to manage subversion repositories"),
@@ -121,7 +114,7 @@ def _optional_requirements() -> List[RequiredResponseType]:
    return result


def _development_requirements() -> List[RequiredResponseType]:
def _development_requirements():
    # Ensure we trigger environment modifications if we have an environment
    if BootstrapEnvironment.spack_yaml().exists():
        with BootstrapEnvironment() as env:
@@ -146,7 +139,7 @@ def _development_requirements() -> List[RequiredResponseType]:
    ]


def status_message(section) -> Tuple[str, bool]:
def status_message(section):
    """Return a status message to be printed to screen that refers to the
    section passed as argument and a bool which is True if there are missing
    dependencies.
@@ -168,7 +161,7 @@ def status_message(section) -> Tuple[str, bool]:
    with ensure_bootstrap_configuration():
        missing_software = False
        for found, err_msg in required_software():
            if not found and err_msg:
            if not found:
                missing_software = True
                msg += "\n " + err_msg
        msg += "\n"

@@ -69,13 +69,13 @@
from spack.installer import InstallError
from spack.util.cpus import cpus_available
from spack.util.environment import (
    SYSTEM_DIRS,
    EnvironmentModifications,
    env_flag,
    filter_system_paths,
    get_path,
    inspect_path,
    is_system_path,
    system_dirs,
    validate,
)
from spack.util.executable import Executable
@@ -397,7 +397,7 @@ def set_compiler_environment_variables(pkg, env):

    env.set("SPACK_COMPILER_SPEC", str(spec.compiler))

    env.set("SPACK_SYSTEM_DIRS", ":".join(SYSTEM_DIRS))
    env.set("SPACK_SYSTEM_DIRS", ":".join(system_dirs))

    compiler.setup_custom_environment(pkg, env)

@@ -485,13 +485,7 @@ def update_compiler_args_for_dep(dep):
        query = pkg.spec[dep.name]
        dep_link_dirs = list()
        try:
            # In some circumstances (particularly for externals) finding
            # libraries for packages can be time consuming, so indicate that
            # we are performing this operation (and also report when it
            # finishes).
            tty.debug("Collecting libraries for {0}".format(dep.name))
            dep_link_dirs.extend(query.libs.directories)
            tty.debug("Libraries for {0} have been collected.".format(dep.name))
        except NoLibrariesError:
            tty.debug("No libraries found for {0}".format(dep.name))

@@ -778,9 +772,7 @@ def setup_package(pkg, dirty, context="build"):
    set_compiler_environment_variables(pkg, env_mods)
    set_wrapper_variables(pkg, env_mods)

    tty.debug("setup_package: grabbing modifications from dependencies")
    env_mods.extend(modifications_from_dependencies(pkg.spec, context, custom_mods_only=False))
    tty.debug("setup_package: collected all modifications from dependencies")

    # architecture specific setup
    platform = spack.platforms.by_name(pkg.spec.architecture.platform)
@@ -788,7 +780,6 @@ def setup_package(pkg, dirty, context="build"):
    platform.setup_platform_environment(pkg, env_mods)

    if context == "build":
        tty.debug("setup_package: setup build environment for root")
        builder = spack.builder.create(pkg)
        builder.setup_build_environment(env_mods)

@@ -799,7 +790,6 @@ def setup_package(pkg, dirty, context="build"):
                " includes and omit it when invoked with '--cflags'."
            )
    elif context == "test":
        tty.debug("setup_package: setup test environment for root")
        env_mods.extend(
            inspect_path(
                pkg.spec.prefix,
@@ -816,7 +806,6 @@ def setup_package(pkg, dirty, context="build"):
    # Load modules on an already clean environment, just before applying Spack's
    # own environment modifications. This ensures Spack controls CC/CXX/... variables.
    if need_compiler:
        tty.debug("setup_package: loading compiler modules")
        for mod in pkg.compiler.modules:
            load_module(mod)

@@ -954,7 +943,6 @@ def default_modifications_for_dep(dep):
        _make_runnable(dep, env)

    def add_modifications_for_dep(dep):
        tty.debug("Adding env modifications for {0}".format(dep.name))
        # Some callers of this function only want the custom modifications.
        # For callers that want both custom and default modifications, we want
        # to perform the default modifications here (this groups custom
@@ -980,7 +968,6 @@ def add_modifications_for_dep(dep):
            builder.setup_dependent_build_environment(env, spec)
        else:
            dpkg.setup_dependent_run_environment(env, spec)
        tty.debug("Added env modifications for {0}".format(dep.name))

    # Note that we want to perform environment modifications in a fixed order.
    # The Spec.traverse method provides this: i.e. in addition to

@@ -8,7 +8,7 @@
import platform
import re
import sys
from typing import List, Optional, Tuple
from typing import List, Tuple

import llnl.util.filesystem as fs

@@ -16,7 +16,7 @@
import spack.builder
import spack.package_base
import spack.util.path
from spack.directives import build_system, conflicts, depends_on, variant
from spack.directives import build_system, depends_on, variant
from spack.multimethod import when

from ._checks import BaseBuilder, execute_build_time_tests
@@ -35,43 +35,6 @@ def _extract_primary_generator(generator):
    return primary_generator


def generator(*names: str, default: Optional[str] = None):
    """The build system generator to use.

    See ``cmake --help`` for a list of valid generators.
    Currently, "Unix Makefiles" and "Ninja" are the only generators
    that Spack supports. Defaults to "Unix Makefiles".

    See https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html
    for more information.

    Args:
        names: allowed generators for this package
        default: default generator
    """
    allowed_values = ("make", "ninja")
    if any(x not in allowed_values for x in names):
        msg = "only 'make' and 'ninja' are allowed for CMake's 'generator' directive"
        raise ValueError(msg)

    default = default or names[0]
    not_used = [x for x in allowed_values if x not in names]

    def _values(x):
        return x in allowed_values

    _values.__doc__ = f"{','.join(names)}"

    variant(
        "generator",
        default=default,
        values=_values,
        description="the build system generator to use",
    )
    for x in not_used:
        conflicts(f"generator={x}")


class CMakePackage(spack.package_base.PackageBase):
    """Specialized class for packages built using CMake

@@ -104,15 +67,8 @@ class CMakePackage(spack.package_base.PackageBase):
        when="^cmake@3.9:",
        description="CMake interprocedural optimization",
    )

    if sys.platform == "win32":
        generator("ninja")
    else:
        generator("ninja", "make", default="make")

    depends_on("cmake", type="build")
    depends_on("gmake", type="build", when="generator=make")
    depends_on("ninja", type="build", when="generator=ninja")
    depends_on("ninja", type="build", when="platform=windows")

    def flags_to_build_system_args(self, flags):
        """Return a list of all command line arguments to pass the specified
@@ -182,6 +138,18 @@ class CMakeBuilder(BaseBuilder):
    | :py:meth:`~.CMakeBuilder.build_directory` | Directory where to |
    |                                           | build the package  |
    +-------------------------------------------+--------------------+

    The generator used by CMake can be specified by providing the ``generator``
    attribute. Per
    https://cmake.org/cmake/help/git-master/manual/cmake-generators.7.html,
    the format is: [<secondary-generator> - ]<primary_generator>.

    The full list of primary and secondary generators supported by CMake may be found
    in the documentation for the version of CMake used; however, at this time Spack
    supports only the primary generators "Unix Makefiles" and "Ninja." Spack's CMake
    support is agnostic with respect to primary generators. Spack will generate a
    runtime error if the generator string does not follow the prescribed format, or if
    the primary generator is not supported.
    """

    #: Phases of a CMake package
@@ -192,6 +160,7 @@ class CMakeBuilder(BaseBuilder):

    #: Names associated with package attributes in the old build-system format
    legacy_attributes: Tuple[str, ...] = (
        "generator",
        "build_targets",
        "install_targets",
        "build_time_test_callbacks",
@@ -202,6 +171,16 @@ class CMakeBuilder(BaseBuilder):
        "build_directory",
    )

    #: The build system generator to use.
    #:
    #: See ``cmake --help`` for a list of valid generators.
    #: Currently, "Unix Makefiles" and "Ninja" are the only generators
    #: that Spack supports. Defaults to "Unix Makefiles".
    #:
    #: See https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html
    #: for more information.
    generator = "Ninja" if sys.platform == "win32" else "Unix Makefiles"

    #: Targets to be used during the build phase
    build_targets: List[str] = []
    #: Targets to be used during the install phase
@@ -223,20 +202,12 @@ def root_cmakelists_dir(self):
        """
        return self.pkg.stage.source_path

    @property
    def generator(self):
        if self.spec.satisfies("generator=make"):
            return "Unix Makefiles"
        if self.spec.satisfies("generator=ninja"):
            return "Ninja"
        msg = f'{self.spec.format()} has an unsupported value for the "generator" variant'
        raise ValueError(msg)

    @property
    def std_cmake_args(self):
        """Standard cmake arguments provided as a property for
        convenience of package writers
        """
        # standard CMake arguments
        std_cmake_args = CMakeBuilder.std_args(self.pkg, generator=self.generator)
        std_cmake_args += getattr(self.pkg, "cmake_flag_args", [])
        return std_cmake_args

@@ -5,7 +5,6 @@

"""Caches used by Spack to store data"""
import os
from typing import Union

import llnl.util.lang
from llnl.util.filesystem import mkdirp
@@ -35,9 +34,7 @@ def _misc_cache():


#: Spack's cache for small data
misc_cache: Union[
    spack.util.file_cache.FileCache, llnl.util.lang.Singleton
] = llnl.util.lang.Singleton(_misc_cache)
misc_cache = llnl.util.lang.Singleton(_misc_cache)


def fetch_cache_location():
@@ -91,6 +88,4 @@ def symlink(self, mirror_ref):


#: Spack's local cache for downloaded source archives
fetch_cache: Union[
    spack.fetch_strategy.FsCache, llnl.util.lang.Singleton
] = llnl.util.lang.Singleton(_fetch_cache)
fetch_cache = llnl.util.lang.Singleton(_fetch_cache)

@@ -38,7 +38,6 @@
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
import spack.util.web as web_util
from spack import traverse
from spack.error import SpackError
from spack.reporters import CDash, CDashConfiguration
from spack.reporters.cdash import build_stamp as cdash_build_stamp
@@ -362,7 +361,60 @@ def append_dep(s, d):


def _spec_matches(spec, match_string):
    return spec.intersects(match_string)
    return spec.satisfies(match_string)


def _remove_attributes(src_dict, dest_dict):
    if "tags" in src_dict and "tags" in dest_dict:
        # For 'tags', we remove any tags that are listed for removal
        for tag in src_dict["tags"]:
            while tag in dest_dict["tags"]:
                dest_dict["tags"].remove(tag)


def _copy_attributes(attrs_list, src_dict, dest_dict):
    for runner_attr in attrs_list:
        if runner_attr in src_dict:
            if runner_attr in dest_dict and runner_attr == "tags":
                # For 'tags', we combine the lists of tags, while
                # avoiding duplicates
                for tag in src_dict[runner_attr]:
                    if tag not in dest_dict[runner_attr]:
                        dest_dict[runner_attr].append(tag)
            elif runner_attr in dest_dict and runner_attr == "variables":
                # For 'variables', we merge the dictionaries. Any conflicts
                # (i.e. 'runner-attributes' has same variable key as the
                # higher level) we resolve by keeping the more specific
                # 'runner-attributes' version.
                for src_key, src_val in src_dict[runner_attr].items():
                    dest_dict[runner_attr][src_key] = copy.deepcopy(src_dict[runner_attr][src_key])
            else:
                dest_dict[runner_attr] = copy.deepcopy(src_dict[runner_attr])


def _find_matching_config(spec, gitlab_ci):
    runner_attributes = {}
    overridable_attrs = ["image", "tags", "variables", "before_script", "script", "after_script"]

    _copy_attributes(overridable_attrs, gitlab_ci, runner_attributes)

    matched = False
    only_first = gitlab_ci.get("match_behavior", "first") == "first"
    for ci_mapping in gitlab_ci["mappings"]:
        for match_string in ci_mapping["match"]:
            if _spec_matches(spec, match_string):
                matched = True
                if "remove-attributes" in ci_mapping:
                    _remove_attributes(ci_mapping["remove-attributes"], runner_attributes)
                if "runner-attributes" in ci_mapping:
                    _copy_attributes(
                        overridable_attrs, ci_mapping["runner-attributes"], runner_attributes
                    )
                break
        if matched and only_first:
            break

    return runner_attributes if matched else None
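
The merge rules implemented by _copy_attributes can be summarized in one example: tags are unioned without duplicates, and variables are merged with the more specific source winning. An illustrative check (the values are made up):

# Illustrative values only; `_copy_attributes` is the function above.
dest = {"tags": ["linux"], "variables": {"A": "1"}}
src = {"tags": ["linux", "large"], "variables": {"A": "2", "B": "3"}}
_copy_attributes(["tags", "variables"], src, dest)
assert dest["tags"] == ["linux", "large"]
assert dest["variables"] == {"A": "2", "B": "3"}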


def _format_job_needs(
@@ -438,28 +490,16 @@ def compute_affected_packages(rev1="HEAD^", rev2="HEAD"):
    return spack.repo.get_all_package_diffs("ARC", rev1=rev1, rev2=rev2)


def get_spec_filter_list(env, affected_pkgs, dependent_traverse_depth=None):
def get_spec_filter_list(env, affected_pkgs):
    """Given a list of package names and an active/concretized
    environment, return the set of all concrete specs from the
    environment that could have been affected by changing the
    list of packages.

    If a ``dependent_traverse_depth`` is given, it is used to limit
    upward (in the parent direction) traversal of specs of touched
    packages. E.g. if 1 is provided, then only direct dependents
    of touched package specs are traversed to produce specs that
    could have been affected by changing the package, while if 0 is
    provided, only the changed specs themselves are traversed. If ``None``
    is given, upward traversal of touched package specs is done all
    the way to the environment roots. Providing a negative number
    results in no traversals at all, yielding an empty set.

    Arguments:

        env (spack.environment.Environment): Active concrete environment
        affected_pkgs (List[str]): Affected package names
        dependent_traverse_depth: Optional integer to limit dependent
            traversal, or None to disable the limit.

    Returns:

@@ -472,237 +512,17 @@ def get_spec_filter_list(env, affected_pkgs, dependent_traverse_depth=None):
    tty.debug("All concrete environment specs:")
    for s in all_concrete_specs:
        tty.debug(" {0}/{1}".format(s.name, s.dag_hash()[:7]))
    affected_pkgs = frozenset(affected_pkgs)
    env_matches = [s for s in all_concrete_specs if s.name in affected_pkgs]
    env_matches = [s for s in all_concrete_specs if s.name in frozenset(affected_pkgs)]
    visited = set()
    dag_hash = lambda s: s.dag_hash()
    for depth, parent in traverse.traverse_nodes(
        env_matches, direction="parents", key=dag_hash, depth=True, order="breadth"
    ):
        if dependent_traverse_depth is not None and depth > dependent_traverse_depth:
            break
        affected_specs.update(parent.traverse(direction="children", visited=visited, key=dag_hash))
    for match in env_matches:
        for parent in match.traverse(direction="parents", key=dag_hash):
            affected_specs.update(
                parent.traverse(direction="children", visited=visited, key=dag_hash)
            )
    return affected_specs
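
A toy model of the depth limit described in the docstring, using a plain dict as the "who depends on me" graph instead of real specs (names are invented):

from collections import deque


def _limited_parents(graph, start, max_depth):
    # Toy sketch of the traversal limit: `graph` maps node -> parents;
    # max_depth=0 keeps only the touched node, 1 adds direct dependents,
    # None places no limit.
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if max_depth is not None and depth >= max_depth:
            continue
        for parent in graph.get(node, []):
            if parent not in seen:
                seen.add(parent)
                queue.append((parent, depth + 1))
    return seen


graph = {"zlib": ["cmake"], "cmake": ["root"]}
assert _limited_parents(graph, "zlib", 0) == {"zlib"}
assert _limited_parents(graph, "zlib", 1) == {"zlib", "cmake"}
assert _limited_parents(graph, "zlib", None) == {"zlib", "cmake", "root"}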


def _build_jobs(phases, staged_phases):
    for phase in phases:
        phase_name = phase["name"]
        spec_labels, dependencies, stages = staged_phases[phase_name]

        for stage_jobs in stages:
            for spec_label in stage_jobs:
                spec_record = spec_labels[spec_label]
                release_spec = spec_record["spec"]
                release_spec_dag_hash = release_spec.dag_hash()
                yield release_spec, release_spec_dag_hash


def _noop(x):
    return x


def _unpack_script(script_section, op=_noop):
    script = []
    for cmd in script_section:
        if isinstance(cmd, list):
            for subcmd in cmd:
                script.append(op(subcmd))
        else:
            script.append(op(cmd))

    return script
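
For example (illustrative values), nested command lists are flattened in order and op is applied to every command:

# Illustrative values only; `_unpack_script` is the function above.
script = _unpack_script(["setup", ["build", "test"], "teardown"], op=str.upper)
assert script == ["SETUP", "BUILD", "TEST", "TEARDOWN"]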


class SpackCI:
    """Spack CI object used to generate intermediate representation
    used by the CI generator(s).
    """

    def __init__(self, ci_config, phases, staged_phases):
        """Given the information from the ci section of the config
        and the job phases, set up the metadata needed to generate the
        Spack CI IR.
        """

        self.ci_config = ci_config
        self.named_jobs = ["any", "build", "cleanup", "noop", "reindex", "signing"]

        self.ir = {
            "jobs": {},
            "temporary-storage-url-prefix": self.ci_config.get(
                "temporary-storage-url-prefix", None
            ),
            "enable-artifacts-buildcache": self.ci_config.get(
                "enable-artifacts-buildcache", False
            ),
            "bootstrap": self.ci_config.get(
                "bootstrap", []
            ),  # This is deprecated and should be removed
            "rebuild-index": self.ci_config.get("rebuild-index", True),
            "broken-specs-url": self.ci_config.get("broken-specs-url", None),
            "broken-tests-packages": self.ci_config.get("broken-tests-packages", []),
            "target": self.ci_config.get("target", "gitlab"),
        }
        jobs = self.ir["jobs"]

        for spec, dag_hash in _build_jobs(phases, staged_phases):
            jobs[dag_hash] = self.__init_job(spec)

        for name in self.named_jobs:
            # Skip the special named jobs
            if name not in ["any", "build"]:
                jobs[name] = self.__init_job("")

    def __init_job(self, spec):
        """Initialize job object"""
        return {"spec": spec, "attributes": {}}

    def __is_named(self, section):
        """Check if a pipeline-gen configuration section is for a named job,
        and if so return the name, otherwise return None.
        """
        for _name in self.named_jobs:
            keys = ["{0}-job".format(_name), "{0}-job-remove".format(_name)]
            if any([key for key in keys if key in section]):
                return _name

        return None

    @staticmethod
    def __job_name(name, suffix=""):
        """Compute the name of a named job with appropriate suffix.
        Valid suffixes are either '-remove', the empty string, or None.
        """
        assert type(name) == str

        jname = name
        if suffix:
            jname = "{0}-job{1}".format(name, suffix)
        else:
            jname = "{0}-job".format(name)

        return jname

    def __apply_submapping(self, dest, spec, section):
        """Apply submapping section to the IR dict"""
        matched = False
        only_first = section.get("match_behavior", "first") == "first"

        for match_attrs in reversed(section["submapping"]):
            attrs = cfg.InternalConfigScope._process_dict_keyname_overrides(match_attrs)
            for match_string in match_attrs["match"]:
                if _spec_matches(spec, match_string):
                    matched = True
                    if "build-job-remove" in match_attrs:
                        spack.config.remove_yaml(dest, attrs["build-job-remove"])
                    if "build-job" in match_attrs:
                        spack.config.merge_yaml(dest, attrs["build-job"])
                    break
            if matched and only_first:
                break

        return dest

    # Generate IR from the configs
    def generate_ir(self):
        """Generate the IR from the Spack CI configurations."""

        jobs = self.ir["jobs"]

        # Implicit job defaults
        defaults = [
            {
                "build-job": {
                    "script": [
                        "cd {env_dir}",
                        "spack env activate --without-view .",
                        "spack ci rebuild",
                    ]
                }
            },
            {"noop-job": {"script": ['echo "All specs already up to date, nothing to rebuild."']}},
        ]

        # Job overrides
        overrides = [
            # Reindex script
            {
                "reindex-job": {
                    "script:": [
                        "spack buildcache update-index --keys --mirror-url {index_target_mirror}"
                    ]
                }
            },
            # Cleanup script
            {
                "cleanup-job": {
                    "script:": [
                        "spack -d mirror destroy --mirror-url {mirror_prefix}/$CI_PIPELINE_ID"
                    ]
                }
            },
            # Add signing job tags
            {"signing-job": {"tags": ["aws", "protected", "notary"]}},
            # Remove reserved tags
            {"any-job-remove": {"tags": SPACK_RESERVED_TAGS}},
        ]

        pipeline_gen = overrides + self.ci_config.get("pipeline-gen", []) + defaults

        for section in reversed(pipeline_gen):
            name = self.__is_named(section)
            has_submapping = "submapping" in section
            section = cfg.InternalConfigScope._process_dict_keyname_overrides(section)

            if name:
                remove_job_name = self.__job_name(name, suffix="-remove")
                merge_job_name = self.__job_name(name)
                do_remove = remove_job_name in section
                do_merge = merge_job_name in section

                def _apply_section(dest, src):
                    if do_remove:
                        dest = spack.config.remove_yaml(dest, src[remove_job_name])
                    if do_merge:
                        dest = copy.copy(spack.config.merge_yaml(dest, src[merge_job_name]))

                if name == "build":
                    # Apply attributes to all build jobs
                    for _, job in jobs.items():
                        if job["spec"]:
                            _apply_section(job["attributes"], section)
                elif name == "any":
                    # Apply section attributes to all jobs
                    for _, job in jobs.items():
                        _apply_section(job["attributes"], section)
                else:
                    # Create a signing job if there is a script and the job hasn't
                    # been initialized yet
                    if name == "signing" and name not in jobs:
                        if "signing-job" in section:
                            if "script" not in section["signing-job"]:
                                continue
                            else:
                                jobs[name] = self.__init_job("")
                    # Apply attributes to named job
                    _apply_section(jobs[name]["attributes"], section)

            elif has_submapping:
                # Apply section jobs with specs to match
                for _, job in jobs.items():
                    if job["spec"]:
                        job["attributes"] = self.__apply_submapping(
                            job["attributes"], job["spec"], section
                        )

        for _, job in jobs.items():
            if job["spec"]:
                job["spec"] = job["spec"].name

        return self.ir
|
||||
|
||||
|
||||
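The concatenation `overrides + user pipeline-gen + defaults`, applied through the `reversed(...)` loop, means defaults are merged first, user configuration second, and the hard-coded overrides last, so each layer can overwrite the one before it. A small sketch of that precedence rule using plain dicts:

    # Sections are applied in reverse, so entries earlier in the
    # combined list overwrite values applied before them.
    overrides = [{"tags": ["protected"]}]
    user_config = [{"tags": ["user"], "image": "ubuntu:22.04"}]
    defaults = [{"tags": ["default"], "script": ["spack ci rebuild"]}]

    attributes = {}
    for section in reversed(overrides + user_config + defaults):
        attributes.update(section)

    # defaults applied first, then user config, then overrides
    assert attributes == {"tags": ["protected"], "image": "ubuntu:22.04",
                          "script": ["spack ci rebuild"]}
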
def generate_gitlab_ci_yaml(
    env,
    print_summary,
@@ -752,42 +572,14 @@ def generate_gitlab_ci_yaml(

    yaml_root = ev.config_dict(env.yaml)

    # Get the joined "ci" config with all of the current scopes resolved
    ci_config = cfg.get("ci")
    if "gitlab-ci" not in yaml_root:
        tty.die('Environment yaml does not have "gitlab-ci" section')

    if not ci_config:
        tty.warn("Environment does not have a `ci` configuration")
        gitlabci_config = yaml_root.get("gitlab-ci")
        if not gitlabci_config:
            tty.die("Environment yaml does not have `gitlab-ci` config section. Cannot recover.")
    gitlab_ci = yaml_root["gitlab-ci"]

        tty.warn(
            "The `gitlab-ci` configuration is deprecated in favor of `ci`.\n",
            "To update run \n\t$ spack env update /path/to/ci/spack.yaml",
        )
        translate_deprecated_config(gitlabci_config)
        ci_config = gitlabci_config

    # Default target is gitlab...and only target is gitlab
    if not ci_config.get("target", "gitlab") == "gitlab":
        tty.die('Spack CI module only generates target "gitlab"')

    cdash_config = cfg.get("cdash")
    cdash_handler = CDashHandler(cdash_config) if "build-group" in cdash_config else None
    cdash_handler = CDashHandler(yaml_root.get("cdash")) if "cdash" in yaml_root else None
    build_group = cdash_handler.build_group if cdash_handler else None

    dependent_depth = os.environ.get("SPACK_PRUNE_UNTOUCHED_DEPENDENT_DEPTH", None)
    if dependent_depth is not None:
        try:
            dependent_depth = int(dependent_depth)
        except (TypeError, ValueError):
            tty.warn(
                f"Unrecognized value ({dependent_depth}) "
                "provided for SPACK_PRUNE_UNTOUCHED_DEPENDENT_DEPTH, "
                "ignoring it."
            )
            dependent_depth = None

    prune_untouched_packages = False
    spack_prune_untouched = os.environ.get("SPACK_PRUNE_UNTOUCHED", None)
    if spack_prune_untouched is not None and spack_prune_untouched.lower() == "true":
@@ -803,9 +595,7 @@ def generate_gitlab_ci_yaml(
            tty.debug("affected pkgs:")
            for p in affected_pkgs:
                tty.debug("  {0}".format(p))
            affected_specs = get_spec_filter_list(
                env, affected_pkgs, dependent_traverse_depth=dependent_depth
            )
            affected_specs = get_spec_filter_list(env, affected_pkgs)
            tty.debug("all affected specs:")
            for s in affected_specs:
                tty.debug("  {0}/{1}".format(s.name, s.dag_hash()[:7]))
@@ -847,25 +637,25 @@ def generate_gitlab_ci_yaml(
    # trying to build.
    broken_specs_url = ""
    known_broken_specs_encountered = []
    if "broken-specs-url" in ci_config:
        broken_specs_url = ci_config["broken-specs-url"]
    if "broken-specs-url" in gitlab_ci:
        broken_specs_url = gitlab_ci["broken-specs-url"]

    enable_artifacts_buildcache = False
    if "enable-artifacts-buildcache" in ci_config:
        enable_artifacts_buildcache = ci_config["enable-artifacts-buildcache"]
    if "enable-artifacts-buildcache" in gitlab_ci:
        enable_artifacts_buildcache = gitlab_ci["enable-artifacts-buildcache"]

    rebuild_index_enabled = True
    if "rebuild-index" in ci_config and ci_config["rebuild-index"] is False:
    if "rebuild-index" in gitlab_ci and gitlab_ci["rebuild-index"] is False:
        rebuild_index_enabled = False

    temp_storage_url_prefix = None
    if "temporary-storage-url-prefix" in ci_config:
        temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
    if "temporary-storage-url-prefix" in gitlab_ci:
        temp_storage_url_prefix = gitlab_ci["temporary-storage-url-prefix"]

    bootstrap_specs = []
    phases = []
    if "bootstrap" in ci_config:
        for phase in ci_config["bootstrap"]:
    if "bootstrap" in gitlab_ci:
        for phase in gitlab_ci["bootstrap"]:
            try:
                phase_name = phase.get("name")
                strip_compilers = phase.get("compiler-agnostic")
@@ -930,31 +720,6 @@ def generate_gitlab_ci_yaml(
    shutil.copyfile(env.manifest_path, os.path.join(concrete_env_dir, "spack.yaml"))
    shutil.copyfile(env.lock_path, os.path.join(concrete_env_dir, "spack.lock"))

    with open(env.manifest_path, "r") as env_fd:
        env_yaml_root = syaml.load(env_fd)
        # Add config scopes to environment
        env_includes = env_yaml_root["spack"].get("include", [])
        cli_scopes = [
            os.path.abspath(s.path)
            for s in cfg.scopes().values()
            if type(s) == cfg.ImmutableConfigScope
            and s.path not in env_includes
            and os.path.exists(s.path)
        ]
        include_scopes = []
        for scope in cli_scopes:
            if scope not in include_scopes and scope not in env_includes:
                include_scopes.insert(0, scope)
        env_includes.extend(include_scopes)
        env_yaml_root["spack"]["include"] = env_includes

        if "gitlab-ci" in env_yaml_root["spack"] and "ci" not in env_yaml_root["spack"]:
            env_yaml_root["spack"]["ci"] = env_yaml_root["spack"].pop("gitlab-ci")
            translate_deprecated_config(env_yaml_root["spack"]["ci"])

        with open(os.path.join(concrete_env_dir, "spack.yaml"), "w") as fd:
            fd.write(syaml.dump_config(env_yaml_root, default_flow_style=False))

    job_log_dir = os.path.join(pipeline_artifacts_dir, "logs")
    job_repro_dir = os.path.join(pipeline_artifacts_dir, "reproduction")
    job_test_dir = os.path.join(pipeline_artifacts_dir, "tests")
@@ -966,7 +731,7 @@ def generate_gitlab_ci_yaml(
    # generation job and the rebuild jobs.  This can happen when gitlab
    # checks out the project into a runner-specific directory, for example,
    # and different runners are picked for generate and rebuild jobs.
    ci_project_dir = os.environ.get("CI_PROJECT_DIR", os.getcwd())
    ci_project_dir = os.environ.get("CI_PROJECT_DIR")
    rel_artifacts_root = os.path.relpath(pipeline_artifacts_dir, ci_project_dir)
    rel_concrete_env_dir = os.path.relpath(concrete_env_dir, ci_project_dir)
    rel_job_log_dir = os.path.relpath(job_log_dir, ci_project_dir)
@@ -980,7 +745,7 @@ def generate_gitlab_ci_yaml(
    try:
        bindist.binary_index.update()
    except bindist.FetchCacheError as e:
        tty.warn(e)
        tty.error(e)

    staged_phases = {}
    try:
@@ -1037,9 +802,6 @@ def generate_gitlab_ci_yaml(
    else:
        broken_spec_urls = web_util.list_url(broken_specs_url)

    spack_ci = SpackCI(ci_config, phases, staged_phases)
    spack_ci_ir = spack_ci.generate_ir()

    before_script, after_script = None, None
    for phase in phases:
        phase_name = phase["name"]
@@ -1067,7 +829,7 @@ def generate_gitlab_ci_yaml(
                spec_record["needs_rebuild"] = False
                continue

            runner_attribs = spack_ci_ir["jobs"][release_spec_dag_hash]["attributes"]
            runner_attribs = _find_matching_config(release_spec, gitlab_ci)

            if not runner_attribs:
                tty.warn("No match found for {0}, skipping it".format(release_spec))
@@ -1098,21 +860,23 @@ def generate_gitlab_ci_yaml(
                except AttributeError:
                    image_name = build_image

            if "script" not in runner_attribs:
                raise AttributeError
            job_script = ["spack env activate --without-view ."]

            def main_script_replacements(cmd):
                return cmd.replace("{env_dir}", concrete_env_dir)
            if artifacts_root:
                job_script.insert(0, "cd {0}".format(concrete_env_dir))

            job_script = _unpack_script(runner_attribs["script"], op=main_script_replacements)
            job_script.extend(["spack ci rebuild"])

            if "script" in runner_attribs:
                job_script = [s for s in runner_attribs["script"]]

            before_script = None
            if "before_script" in runner_attribs:
                before_script = _unpack_script(runner_attribs["before_script"])
                before_script = [s for s in runner_attribs["before_script"]]

            after_script = None
            if "after_script" in runner_attribs:
                after_script = _unpack_script(runner_attribs["after_script"])
                after_script = [s for s in runner_attribs["after_script"]]

            osname = str(release_spec.architecture)
            job_name = get_job_name(
@@ -1174,7 +938,7 @@ def main_script_replacements(cmd):
                bs_arch = c_spec.architecture
                bs_arch_family = bs_arch.target.microarchitecture.family
                if (
                    c_spec.intersects(compiler_pkg_spec)
                    c_spec.satisfies(compiler_pkg_spec)
                    and bs_arch_family == spec_arch_family
                ):
                    # We found the bootstrap compiler this release spec
@@ -1356,6 +1120,19 @@ def main_script_replacements(cmd):
            else:
                tty.warn("Unable to populate buildgroup without CDash credentials")

    service_job_config = None
    if "service-job-attributes" in gitlab_ci:
        service_job_config = gitlab_ci["service-job-attributes"]

    default_attrs = [
        "image",
        "tags",
        "variables",
        "before_script",
        # 'script',
        "after_script",
    ]

    service_job_retries = {
        "max": 2,
        "when": ["runner_system_failure", "stuck_or_timeout_failure", "script_failure"],
@@ -1367,29 +1144,55 @@ def main_script_replacements(cmd):
        # schedule a job to clean up the temporary storage location
        # associated with this pipeline.
        stage_names.append("cleanup-temp-storage")
        cleanup_job = copy.deepcopy(spack_ci_ir["jobs"]["cleanup"]["attributes"])
        cleanup_job = {}

        if service_job_config:
            _copy_attributes(default_attrs, service_job_config, cleanup_job)

        if "tags" in cleanup_job:
            service_tags = _remove_reserved_tags(cleanup_job["tags"])
            cleanup_job["tags"] = service_tags

        cleanup_job["stage"] = "cleanup-temp-storage"
        cleanup_job["script"] = [
            "spack -d mirror destroy --mirror-url {0}/$CI_PIPELINE_ID".format(
                temp_storage_url_prefix
            )
        ]
        cleanup_job["when"] = "always"
        cleanup_job["retry"] = service_job_retries
        cleanup_job["interruptible"] = True

        cleanup_job["script"] = _unpack_script(
            cleanup_job["script"],
            op=lambda cmd: cmd.replace("mirror_prefix", temp_storage_url_prefix),
        )

        output_object["cleanup"] = cleanup_job

    if (
        "script" in spack_ci_ir["jobs"]["signing"]["attributes"]
        "signing-job-attributes" in gitlab_ci
        and spack_pipeline_type == "spack_protected_branch"
    ):
        # External signing: generate a job to check and sign binary pkgs
        stage_names.append("stage-sign-pkgs")
        signing_job = spack_ci_ir["jobs"]["signing"]["attributes"]
        signing_job_config = gitlab_ci["signing-job-attributes"]
        signing_job = {}

        signing_job["script"] = _unpack_script(signing_job["script"])
        signing_job_attrs_to_copy = [
            "image",
            "tags",
            "variables",
            "before_script",
            "script",
            "after_script",
        ]

        _copy_attributes(signing_job_attrs_to_copy, signing_job_config, signing_job)

        signing_job_tags = []
        if "tags" in signing_job:
            signing_job_tags = _remove_reserved_tags(signing_job["tags"])

        for tag in ["aws", "protected", "notary"]:
            if tag not in signing_job_tags:
                signing_job_tags.append(tag)
        signing_job["tags"] = signing_job_tags

        signing_job["stage"] = "stage-sign-pkgs"
        signing_job["when"] = "always"
@@ -1401,17 +1204,23 @@ def main_script_replacements(cmd):
    if rebuild_index_enabled:
        # Add a final job to regenerate the index
        stage_names.append("stage-rebuild-index")
        final_job = spack_ci_ir["jobs"]["reindex"]["attributes"]
        final_job = {}

        if service_job_config:
            _copy_attributes(default_attrs, service_job_config, final_job)

        if "tags" in final_job:
            service_tags = _remove_reserved_tags(final_job["tags"])
            final_job["tags"] = service_tags

        index_target_mirror = mirror_urls[0]
        if remote_mirror_override:
            index_target_mirror = remote_mirror_override
        final_job["stage"] = "stage-rebuild-index"
        final_job["script"] = _unpack_script(
            final_job["script"],
            op=lambda cmd: cmd.replace("{index_target_mirror}", index_target_mirror),
        )

        final_job["stage"] = "stage-rebuild-index"
        final_job["script"] = [
            "spack buildcache update-index --keys --mirror-url {0}".format(index_target_mirror)
        ]
        final_job["when"] = "always"
        final_job["retry"] = service_job_retries
        final_job["interruptible"] = True
@@ -1459,9 +1268,6 @@ def main_script_replacements(cmd):
    if spack_stack_name:
        output_object["variables"]["SPACK_CI_STACK_NAME"] = spack_stack_name

    # Ensure the child pipeline always runs
    output_object["workflow"] = {"rules": [{"when": "always"}]}

    if spack_buildcache_copy:
        # Write out the file describing specs that should be copied
        copy_specs_dir = os.path.join(pipeline_artifacts_dir, "specs_to_copy")
@@ -1495,7 +1301,13 @@ def main_script_replacements(cmd):
    else:
        # No jobs were generated
        tty.debug("No specs to rebuild, generating no-op job")
        noop_job = spack_ci_ir["jobs"]["noop"]["attributes"]
        noop_job = {}

        if service_job_config:
            _copy_attributes(default_attrs, service_job_config, noop_job)

        if "script" not in noop_job:
            noop_job["script"] = ['echo "All specs already up to date, nothing to rebuild."']

        noop_job["retry"] = service_job_retries

@@ -1509,7 +1321,7 @@ def main_script_replacements(cmd):
        sys.exit(1)

    with open(output_file, "w") as outf:
        outf.write(syaml.dump(sorted_output, default_flow_style=True))
        outf.write(syaml.dump_config(sorted_output, default_flow_style=True))


def _url_encode_string(input_string):
@@ -1689,10 +1501,7 @@ def copy_files_to_artifacts(src, artifacts_dir):
    try:
        fs.copy(src, artifacts_dir)
    except Exception as err:
        msg = ("Unable to copy files ({0}) to artifacts {1} due to " "exception: {2}").format(
            src, artifacts_dir, str(err)
        )
        tty.warn(msg)
        tty.warn(f"Unable to copy files ({src}) to artifacts {artifacts_dir} due to: {err}")


def copy_stage_logs_to_artifacts(job_spec, job_log_dir):
@@ -1912,7 +1721,6 @@ def reproduce_ci_job(url, work_dir):
    function is a set of printed instructions for running docker and then
    commands to run to reproduce the build once inside the container.
    """
    work_dir = os.path.realpath(work_dir)
    download_and_extract_artifacts(url, work_dir)

    lock_file = fs.find(work_dir, "spack.lock")[0]
@@ -2077,9 +1885,7 @@ def reproduce_ci_job(url, work_dir):
    if job_image:
        inst_list.append("\nRun the following command:\n\n")
        inst_list.append(
            "    $ docker run --rm --name spack_reproducer -v {0}:{1}:Z -ti {2}\n".format(
                work_dir, mount_as_dir, job_image
            )
            "    $ docker run --rm -v {0}:{1} -ti {2}\n".format(work_dir, mount_as_dir, job_image)
        )
        inst_list.append("\nOnce inside the container:\n\n")
    else:
@@ -2130,16 +1936,13 @@ def process_command(name, commands, repro_dir):
    # Create a string [command 1] && [command 2] && ... && [command n] with commands
    # quoted using double quotes.
    args_to_string = lambda args: " ".join('"{}"'.format(arg) for arg in args)
    full_command = " \n ".join(map(args_to_string, commands))
    full_command = " && ".join(map(args_to_string, commands))

    # Write the command to a shell script
    script = "{0}.sh".format(name)
    with open(script, "w") as fd:
        fd.write("#!/bin/sh\n\n")
        fd.write("\n# spack {0} command\n".format(name))
        fd.write("set -e\n")
        if os.environ.get("SPACK_VERBOSE_SCRIPT"):
            fd.write("set -x\n")
        fd.write(full_command)
        fd.write("\n")

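To see what the " && " join produces, here is a short sketch reproducing the script body for a sample command list (the commands themselves are illustrative):

    commands = [["spack", "env", "activate", "."], ["spack", "ci", "rebuild"]]
    args_to_string = lambda args: " ".join('"{}"'.format(arg) for arg in args)
    full_command = " && ".join(map(args_to_string, commands))
    print(full_command)
    # "spack" "env" "activate" "." && "spack" "ci" "rebuild"
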
@@ -2488,66 +2291,3 @@ def report_skipped(self, spec, directory_name, reason):
        )
        reporter = CDash(configuration=configuration)
        reporter.test_skipped_report(directory_name, spec, reason)


def translate_deprecated_config(config):
    # Remove all deprecated keys from config
    mappings = config.pop("mappings", [])
    match_behavior = config.pop("match_behavior", "first")

    build_job = {}
    if "image" in config:
        build_job["image"] = config.pop("image")
    if "tags" in config:
        build_job["tags"] = config.pop("tags")
    if "variables" in config:
        build_job["variables"] = config.pop("variables")
    if "before_script" in config:
        build_job["before_script"] = config.pop("before_script")
    if "script" in config:
        build_job["script"] = config.pop("script")
    if "after_script" in config:
        build_job["after_script"] = config.pop("after_script")

    signing_job = None
    if "signing-job-attributes" in config:
        signing_job = {"signing-job": config.pop("signing-job-attributes")}

    service_job_attributes = None
    if "service-job-attributes" in config:
        service_job_attributes = config.pop("service-job-attributes")

    # If this config already has pipeline-gen, do nothing more
    if "pipeline-gen" in config:
        return True if mappings or build_job or signing_job or service_job_attributes else False

    config["target"] = "gitlab"

    config["pipeline-gen"] = []
    pipeline_gen = config["pipeline-gen"]

    # Build Job
    submapping = []
    for section in mappings:
        submapping_section = {"match": section["match"]}
        if "runner-attributes" in section:
            submapping_section["build-job"] = section["runner-attributes"]
        if "remove-attributes" in section:
            submapping_section["build-job-remove"] = section["remove-attributes"]
        submapping.append(submapping_section)
    pipeline_gen.append({"submapping": submapping, "match_behavior": match_behavior})

    if build_job:
        pipeline_gen.append({"build-job": build_job})

    # Signing Job
    if signing_job:
        pipeline_gen.append(signing_job)

    # Service Jobs
    if service_job_attributes:
        pipeline_gen.append({"reindex-job": service_job_attributes})
        pipeline_gen.append({"noop-job": service_job_attributes})
        pipeline_gen.append({"cleanup-job": service_job_attributes})

    return True

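A worked example of the shape this translation produces, using a trimmed re-implementation that handles only "mappings" and "image" for brevity (the legacy keys below are illustrative):

    def translate(config):
        mappings = config.pop("mappings", [])
        build_job = {}
        if "image" in config:
            build_job["image"] = config.pop("image")
        submapping = [
            {"match": m["match"], "build-job": m["runner-attributes"]} for m in mappings
        ]
        config["pipeline-gen"] = [{"submapping": submapping, "match_behavior": "first"}]
        if build_job:
            config["pipeline-gen"].append({"build-job": build_job})
        return config

    legacy = {"image": "ubuntu:22.04",
              "mappings": [{"match": ["gcc"], "runner-attributes": {"tags": ["x86_64"]}}]}
    print(translate(legacy))
    # {'pipeline-gen': [{'submapping': [{'match': ['gcc'],
    #   'build-job': {'tags': ['x86_64']}}], 'match_behavior': 'first'},
    #   {'build-job': {'image': 'ubuntu:22.04'}}]}
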
@@ -498,11 +498,11 @@ def list_fn(args):

    if not args.allarch:
        arch = spack.spec.Spec.default_arch()
        specs = [s for s in specs if s.intersects(arch)]
        specs = [s for s in specs if s.satisfies(arch)]

    if args.specs:
        constraints = set(args.specs)
        specs = [s for s in specs if any(s.intersects(c) for c in constraints)]
        specs = [s for s in specs if any(s.satisfies(c) for c in constraints)]
    if sys.stdout.isatty():
        builds = len(specs)
        tty.msg("%s." % plural(builds, "cached build"))

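The satisfies-to-intersects switch here is a semantic change, not a rename: satisfies asks whether one constraint is at least as specific as another, while intersects only asks whether the two constraints can admit a common concrete spec. A set-based analogy (not Spack's actual Spec implementation) of the difference:

    # Analogy: a spec constraint is the set of concrete specs it admits.
    a = {"gcc@11 arch=x86_64", "gcc@11 arch=aarch64"}
    b = {"gcc@11 arch=x86_64"}

    def satisfies(x, y):   # x is at least as constrained as y
        return x <= y

    def intersects(x, y):  # x and y admit at least one common spec
        return bool(x & y)

    assert satisfies(b, a)                        # b is a subset of a
    assert not satisfies(a, b)                    # a admits more than b
    assert intersects(a, b) and intersects(b, a)  # intersection is symmetric
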
@@ -33,6 +33,12 @@ def deindent(desc):
    return desc.replace("    ", "")


def get_env_var(variable_name):
    if variable_name in os.environ:
        return os.environ.get(variable_name)
    return None


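As defined, get_env_var behaves the same as a plain os.environ.get(variable_name), returning None when the variable is unset; presumably the wrapper exists to give the CI code a single seam to stub out in tests. A quick check (the variable names are illustrative):

    import os

    def get_env_var(variable_name):
        if variable_name in os.environ:
            return os.environ.get(variable_name)
        return None

    os.environ["SPACK_DEMO_VAR"] = "1"
    assert get_env_var("SPACK_DEMO_VAR") == os.environ.get("SPACK_DEMO_VAR")
    assert get_env_var("SPACK_UNSET_VAR") is None
    assert os.environ.get("SPACK_UNSET_VAR") is None
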
def setup_parser(subparser):
    setup_parser.parser = subparser
    subparsers = subparser.add_subparsers(help="CI sub-commands")
@@ -249,9 +255,10 @@ def ci_rebuild(args):

    # Make sure the environment is "gitlab-enabled", or else there's nothing
    # to do.
    ci_config = cfg.get("ci")
    if not ci_config:
        tty.die("spack ci rebuild requires an env containing ci cfg")
    yaml_root = ev.config_dict(env.yaml)
    gitlab_ci = yaml_root["gitlab-ci"] if "gitlab-ci" in yaml_root else None
    if not gitlab_ci:
        tty.die("spack ci rebuild requires an env containing gitlab-ci cfg")

    tty.msg(
        "SPACK_BUILDCACHE_DESTINATION={0}".format(
@@ -262,27 +269,27 @@ def ci_rebuild(args):
    # Grab the environment variables we need.  These either come from the
    # pipeline generation step ("spack ci generate"), where they were written
    # out as variables, or else provided by GitLab itself.
    pipeline_artifacts_dir = os.environ.get("SPACK_ARTIFACTS_ROOT")
    job_log_dir = os.environ.get("SPACK_JOB_LOG_DIR")
    job_test_dir = os.environ.get("SPACK_JOB_TEST_DIR")
    repro_dir = os.environ.get("SPACK_JOB_REPRO_DIR")
    local_mirror_dir = os.environ.get("SPACK_LOCAL_MIRROR_DIR")
    concrete_env_dir = os.environ.get("SPACK_CONCRETE_ENV_DIR")
    ci_pipeline_id = os.environ.get("CI_PIPELINE_ID")
    ci_job_name = os.environ.get("CI_JOB_NAME")
    signing_key = os.environ.get("SPACK_SIGNING_KEY")
    job_spec_pkg_name = os.environ.get("SPACK_JOB_SPEC_PKG_NAME")
    job_spec_dag_hash = os.environ.get("SPACK_JOB_SPEC_DAG_HASH")
    compiler_action = os.environ.get("SPACK_COMPILER_ACTION")
    spack_pipeline_type = os.environ.get("SPACK_PIPELINE_TYPE")
    remote_mirror_override = os.environ.get("SPACK_REMOTE_MIRROR_OVERRIDE")
    remote_mirror_url = os.environ.get("SPACK_REMOTE_MIRROR_URL")
    spack_ci_stack_name = os.environ.get("SPACK_CI_STACK_NAME")
    shared_pr_mirror_url = os.environ.get("SPACK_CI_SHARED_PR_MIRROR_URL")
    rebuild_everything = os.environ.get("SPACK_REBUILD_EVERYTHING")
    pipeline_artifacts_dir = get_env_var("SPACK_ARTIFACTS_ROOT")
    job_log_dir = get_env_var("SPACK_JOB_LOG_DIR")
    job_test_dir = get_env_var("SPACK_JOB_TEST_DIR")
    repro_dir = get_env_var("SPACK_JOB_REPRO_DIR")
    local_mirror_dir = get_env_var("SPACK_LOCAL_MIRROR_DIR")
    concrete_env_dir = get_env_var("SPACK_CONCRETE_ENV_DIR")
    ci_pipeline_id = get_env_var("CI_PIPELINE_ID")
    ci_job_name = get_env_var("CI_JOB_NAME")
    signing_key = get_env_var("SPACK_SIGNING_KEY")
    job_spec_pkg_name = get_env_var("SPACK_JOB_SPEC_PKG_NAME")
    job_spec_dag_hash = get_env_var("SPACK_JOB_SPEC_DAG_HASH")
    compiler_action = get_env_var("SPACK_COMPILER_ACTION")
    spack_pipeline_type = get_env_var("SPACK_PIPELINE_TYPE")
    remote_mirror_override = get_env_var("SPACK_REMOTE_MIRROR_OVERRIDE")
    remote_mirror_url = get_env_var("SPACK_REMOTE_MIRROR_URL")
    spack_ci_stack_name = get_env_var("SPACK_CI_STACK_NAME")
    shared_pr_mirror_url = get_env_var("SPACK_CI_SHARED_PR_MIRROR_URL")
    rebuild_everything = get_env_var("SPACK_REBUILD_EVERYTHING")

    # Construct absolute paths relative to current $CI_PROJECT_DIR
    ci_project_dir = os.environ.get("CI_PROJECT_DIR")
    ci_project_dir = get_env_var("CI_PROJECT_DIR")
    pipeline_artifacts_dir = os.path.join(ci_project_dir, pipeline_artifacts_dir)
    job_log_dir = os.path.join(ci_project_dir, job_log_dir)
    job_test_dir = os.path.join(ci_project_dir, job_test_dir)
@@ -299,10 +306,8 @@ def ci_rebuild(args):
    # Query the environment manifest to find out whether we're reporting to a
    # CDash instance, and if so, gather some information from the manifest to
    # support that task.
    cdash_config = cfg.get("cdash")
    cdash_handler = None
    if "build-group" in cdash_config:
        cdash_handler = spack_ci.CDashHandler(cdash_config)
    cdash_handler = spack_ci.CDashHandler(yaml_root.get("cdash")) if "cdash" in yaml_root else None
    if cdash_handler:
        tty.debug("cdash url = {0}".format(cdash_handler.url))
        tty.debug("cdash project = {0}".format(cdash_handler.project))
        tty.debug("cdash project_enc = {0}".format(cdash_handler.project_enc))
@@ -335,13 +340,13 @@ def ci_rebuild(args):
    pipeline_mirror_url = None

    temp_storage_url_prefix = None
    if "temporary-storage-url-prefix" in ci_config:
        temp_storage_url_prefix = ci_config["temporary-storage-url-prefix"]
    if "temporary-storage-url-prefix" in gitlab_ci:
        temp_storage_url_prefix = gitlab_ci["temporary-storage-url-prefix"]
        pipeline_mirror_url = url_util.join(temp_storage_url_prefix, ci_pipeline_id)

    enable_artifacts_mirror = False
    if "enable-artifacts-buildcache" in ci_config:
        enable_artifacts_mirror = ci_config["enable-artifacts-buildcache"]
    if "enable-artifacts-buildcache" in gitlab_ci:
        enable_artifacts_mirror = gitlab_ci["enable-artifacts-buildcache"]
    if enable_artifacts_mirror or (
        spack_is_pr_pipeline and not enable_artifacts_mirror and not temp_storage_url_prefix
    ):
@@ -588,8 +593,8 @@ def ci_rebuild(args):
    # avoid wasting compute cycles attempting to build those hashes.
    if install_exit_code == INSTALL_FAIL_CODE and spack_is_develop_pipeline:
        tty.debug("Install failed on develop")
        if "broken-specs-url" in ci_config:
            broken_specs_url = ci_config["broken-specs-url"]
        if "broken-specs-url" in gitlab_ci:
            broken_specs_url = gitlab_ci["broken-specs-url"]
            dev_fail_hash = job_spec.dag_hash()
            broken_spec_path = url_util.join(broken_specs_url, dev_fail_hash)
            tty.msg("Reporting broken develop build as: {0}".format(broken_spec_path))
@@ -597,8 +602,8 @@ def ci_rebuild(args):
                broken_spec_path,
                job_spec_pkg_name,
                spack_ci_stack_name,
                os.environ.get("CI_JOB_URL"),
                os.environ.get("CI_PIPELINE_URL"),
                get_env_var("CI_JOB_URL"),
                get_env_var("CI_PIPELINE_URL"),
                job_spec.to_dict(hash=ht.dag_hash),
            )

@@ -610,14 +615,17 @@ def ci_rebuild(args):
    # the package, run them and copy the output.  Failures of any kind should
    # *not* terminate the build process or preclude creating the build cache.
    broken_tests = (
        "broken-tests-packages" in ci_config
        and job_spec.name in ci_config["broken-tests-packages"]
        "broken-tests-packages" in gitlab_ci
        and job_spec.name in gitlab_ci["broken-tests-packages"]
    )
    reports_dir = fs.join_path(os.getcwd(), "cdash_report")
    if args.tests and broken_tests:
        tty.warn("Unable to run stand-alone tests since listed in " "ci's 'broken-tests-packages'")
        tty.warn(
            "Unable to run stand-alone tests since listed in "
            "gitlab-ci's 'broken-tests-packages'"
        )
        if cdash_handler:
            msg = "Package is listed in ci's broken-tests-packages"
            msg = "Package is listed in gitlab-ci's broken-tests-packages"
            cdash_handler.report_skipped(job_spec, reports_dir, reason=msg)
            cdash_handler.copy_test_results(reports_dir, job_test_dir)
    elif args.tests:
@@ -680,8 +688,8 @@ def ci_rebuild(args):

    # If this is a develop pipeline, check if the spec that we just built is
    # on the broken-specs list.  If so, remove it.
    if spack_is_develop_pipeline and "broken-specs-url" in ci_config:
        broken_specs_url = ci_config["broken-specs-url"]
    if spack_is_develop_pipeline and "broken-specs-url" in gitlab_ci:
        broken_specs_url = gitlab_ci["broken-specs-url"]
        just_built_hash = job_spec.dag_hash()
        broken_spec_path = url_util.join(broken_specs_url, just_built_hash)
        if web_util.url_exists(broken_spec_path):
@@ -698,9 +706,9 @@ def ci_rebuild(args):
    else:
        tty.debug("spack install exited non-zero, will not create buildcache")

    api_root_url = os.environ.get("CI_API_V4_URL")
    ci_project_id = os.environ.get("CI_PROJECT_ID")
    ci_job_id = os.environ.get("CI_JOB_ID")
    api_root_url = get_env_var("CI_API_V4_URL")
    ci_project_id = get_env_var("CI_PROJECT_ID")
    ci_job_id = get_env_var("CI_JOB_ID")

    repro_job_url = "{0}/projects/{1}/jobs/{2}/artifacts".format(
        api_root_url, ci_project_id, ci_job_id

@@ -514,15 +514,7 @@ def add_concretizer_args(subparser):
        dest="concretizer:reuse",
        const=True,
        default=None,
        help="reuse installed packages/buildcaches when possible",
    )
    subgroup.add_argument(
        "--reuse-deps",
        action=ConfigSetAction,
        dest="concretizer:reuse",
        const="dependencies",
        default=None,
        help="reuse installed dependencies only",
        help="reuse installed dependencies/buildcaches when possible",
    )


@@ -7,7 +7,6 @@
import collections
import os
import shutil
from typing import List

import llnl.util.filesystem as fs
import llnl.util.tty as tty
@@ -245,35 +244,30 @@ def config_remove(args):
    spack.config.set(path, existing, scope)


def _can_update_config_file(scope: spack.config.ConfigScope, cfg_file):
    if isinstance(scope, spack.config.SingleFileScope):
        return fs.can_access(cfg_file)
    return fs.can_write_to_dir(scope.path) and fs.can_access(cfg_file)
def _can_update_config_file(scope_dir, cfg_file):
    dir_ok = fs.can_write_to_dir(scope_dir)
    cfg_ok = fs.can_access(cfg_file)
    return dir_ok and cfg_ok


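The rewritten check treats the scope as a plain directory: the directory must be writable (so a backup file can be created next to the config) and the config file itself accessible. A standalone sketch using os.access, assuming semantics roughly similar to fs.can_write_to_dir and fs.can_access:

    import os
    import tempfile

    def can_update(scope_dir, cfg_file):
        # writable dir for the .bkp copy, readable and writable cfg file
        dir_ok = os.access(scope_dir, os.W_OK | os.X_OK)
        cfg_ok = os.access(cfg_file, os.R_OK | os.W_OK)
        return dir_ok and cfg_ok

    with tempfile.TemporaryDirectory() as d:
        cfg = os.path.join(d, "config.yaml")
        open(cfg, "w").close()
        assert can_update(d, cfg)
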
def config_update(args):
    # Read the configuration files
    spack.config.config.get_config(args.section, scope=args.scope)
    updates: List[spack.config.ConfigScope] = list(
        filter(
            lambda s: not isinstance(
                s, (spack.config.InternalConfigScope, spack.config.ImmutableConfigScope)
            ),
            spack.config.config.format_updates[args.section],
        )
    )
    updates = spack.config.config.format_updates[args.section]

    cannot_overwrite, skip_system_scope = [], False
    for scope in updates:
        cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
        can_be_updated = _can_update_config_file(scope, cfg_file)
        scope_dir = scope.path
        can_be_updated = _can_update_config_file(scope_dir, cfg_file)
        if not can_be_updated:
            if scope.name == "system":
                skip_system_scope = True
                tty.warn(
                msg = (
                    'Not enough permissions to write to "system" scope. '
                    f"Skipping update at that location [cfg={cfg_file}]"
                    "Skipping update at that location [cfg={0}]"
                )
                tty.warn(msg.format(cfg_file))
                continue
            cannot_overwrite.append((scope, cfg_file))

@@ -321,14 +315,18 @@ def config_update(args):
    # Get a function to update the format
    update_fn = spack.config.ensure_latest_format_fn(args.section)
    for scope in updates:
        data = scope.get_section(args.section).pop(args.section)
        cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
        with open(cfg_file) as f:
            data = syaml.load_config(f) or {}
            data = data.pop(args.section, {})
        update_fn(data)

        # Make a backup copy and rewrite the file
        bkp_file = cfg_file + ".bkp"
        shutil.copy(cfg_file, bkp_file)
        spack.config.config.update_config(args.section, data, scope=scope.name, force=True)
        tty.msg(f'File "{cfg_file}" updated [backup={bkp_file}]')
        msg = 'File "{0}" updated [backup={1}]'
        tty.msg(msg.format(cfg_file, bkp_file))


def _can_revert_update(scope_dir, cfg_file, bkp_file):

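The update loop now reads the section straight from the YAML file rather than through the scope cache, backs the file up, then rewrites it. A minimal sketch of that read, back up, rewrite shape, with json standing in for Spack's syaml and illustrative file and section names:

    import json
    import os
    import shutil
    import tempfile

    def update_file(cfg_file, section, update_fn):
        with open(cfg_file) as f:
            data = (json.load(f) or {}).pop(section, {})
        update_fn(data)
        shutil.copy(cfg_file, cfg_file + ".bkp")  # backup before rewriting
        with open(cfg_file, "w") as f:
            json.dump({section: data}, f)

    with tempfile.TemporaryDirectory() as d:
        cfg = os.path.join(d, "config.json")
        with open(cfg, "w") as f:
            json.dump({"config": {"verify_ssl": True}}, f)
        update_file(cfg, "config", lambda data: data.update({"format": 2}))
        assert os.path.exists(cfg + ".bkp")
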
@@ -39,14 +39,19 @@
compiler flags:
    @g{cflags="flags"}               cppflags, cflags, cxxflags,
                                     fflags, ldflags, ldlibs
    @g{==}                           propagate flags to package dependencies
    @g{cflags=="flags"}              propagate flags to package dependencies
                                     cppflags, cflags, cxxflags, fflags,
                                     ldflags, ldlibs

variants:
    @B{+variant}                     enable <variant>
    @B{++variant}                    propagate enable <variant>
    @r{-variant} or @r{~variant}     disable <variant>
    @r{--variant} or @r{~~variant}   propagate disable <variant>
    @B{variant=value}                set non-boolean <variant> to <value>
    @B{variant==value}               propagate non-boolean <variant> to <value>
    @B{variant=value1,value2,value3} set multi-value <variant> values
    @B{++}, @r{--}, @r{~~}, @B{==}   propagate variants to package dependencies
    @B{variant==value1,value2,value3} propagate multi-value <variant> values

architecture variants:
    @m{platform=platform}            linux, darwin, cray, etc.

@@ -283,7 +283,7 @@ def print_tests(pkg):
    c_names = ("gcc", "intel", "intel-parallel-studio", "pgi")
    if pkg.name in c_names:
        v_names.extend(["c", "cxx", "fortran"])
    if pkg.spec.intersects("llvm+clang"):
    if pkg.spec.satisfies("llvm+clang"):
        v_names.extend(["c", "cxx"])
    # TODO Refactor END

@@ -263,6 +146,146 @@ def report_filename(args: argparse.Namespace, specs: List[spack.spec.Spec]) -> s
    return result


def install_specs(specs, install_kwargs, cli_args):
    try:
        if ev.active_environment():
            install_specs_inside_environment(specs, install_kwargs, cli_args)
        else:
            install_specs_outside_environment(specs, install_kwargs)
    except spack.build_environment.InstallError as e:
        if cli_args.show_log_on_error:
            e.print_context()
            assert e.pkg, "Expected InstallError to include the associated package"
            if not os.path.exists(e.pkg.build_log_path):
                tty.error("'spack install' created no log.")
            else:
                sys.stderr.write("Full build log:\n")
                with open(e.pkg.build_log_path) as log:
                    shutil.copyfileobj(log, sys.stderr)
        raise


def install_specs_inside_environment(specs, install_kwargs, cli_args):
    specs_to_install, specs_to_add = [], []
    env = ev.active_environment()
    for abstract, concrete in specs:
        # This won't find specs added to the env since the last
        # concretize; should we consider enforcing concretization
        # of the env before allowing specs to be installed?
        m_spec = env.matching_spec(abstract)

        # If there is any ambiguity in the above call to matching_spec
        # (i.e. if more than one spec in the environment matches), then
        # SpackEnvironmentError is raised, with a message listing
        # the matches.  Getting to this point means there were either
        # no matches or exactly one match.

        if not m_spec and not cli_args.add:
            msg = (
                "Cannot install '{0}' because it is not in the current environment."
                " You can add it to the environment with 'spack add {0}', or as part"
                " of the install command with 'spack install --add {0}'"
            ).format(str(abstract))
            tty.die(msg)

        if not m_spec:
            tty.debug("adding {0} as a root".format(abstract.name))
            specs_to_add.append((abstract, concrete))
            continue

        tty.debug("exactly one match for {0} in env -> {1}".format(m_spec.name, m_spec.dag_hash()))

        if m_spec in env.roots() or not cli_args.add:
            # either the single match is a root spec (in which case
            # the spec is not added to the env again), or the user did
            # not specify --add (in which case it is assumed we are
            # installing already-concretized specs in the env)
            tty.debug("just install {0}".format(m_spec.name))
            specs_to_install.append(m_spec)
        else:
            # the single match is not a root (i.e. it's a dependency),
            # and --add was specified, so we'll add it as a
            # root before installing
            tty.debug("add {0} then install it".format(m_spec.name))
            specs_to_add.append((abstract, concrete))
    if specs_to_add:
        tty.debug("Adding the following specs as roots:")
        for abstract, concrete in specs_to_add:
            tty.debug("  {0}".format(abstract.name))
            with env.write_transaction():
                specs_to_install.append(env.concretize_and_add(abstract, concrete))
                env.write(regenerate=False)
    # Install the validated list of cli specs
    if specs_to_install:
        tty.debug("Installing the following cli specs:")
        for s in specs_to_install:
            tty.debug("  {0}".format(s.name))
        env.install_specs(specs_to_install, **install_kwargs)


def install_specs_outside_environment(specs, install_kwargs):
    installs = [(concrete.package, install_kwargs) for _, concrete in specs]
    builder = PackageInstaller(installs)
    builder.install()


def install_all_specs_from_active_environment(
    install_kwargs, only_concrete, cli_test_arg, reporter_factory
):
    """Install all specs from the active environment

    Args:
        install_kwargs (dict): dictionary of options to be passed to the installer
        only_concrete (bool): if true don't concretize the environment, but install
            only the specs that are already concrete
        cli_test_arg (bool or str): command line argument to select which test to run
        reporter_factory: function creating the reporter object for the installations
    """
    env = ev.active_environment()
    if not env:
        msg = "install requires a package argument or active environment"
        if "spack.yaml" in os.listdir(os.getcwd()):
            # There's a spack.yaml file in the working dir, the user may
            # have intended to use that
            msg += "\n\n"
            msg += "Did you mean to install using the `spack.yaml`"
            msg += " in this directory? Try: \n"
            msg += "    spack env activate .\n"
            msg += "    spack install\n"
            msg += "  OR\n"
            msg += "    spack --env . install"
        tty.die(msg)

    install_kwargs["tests"] = compute_tests_install_kwargs(env.user_specs, cli_test_arg)
    if not only_concrete:
        with env.write_transaction():
            concretized_specs = env.concretize(tests=install_kwargs["tests"])
            ev.display_specs(concretized_specs)

            # save view regeneration for later, so that we only do it
            # once, as it can be slow.
            env.write(regenerate=False)

    specs = env.all_specs()
    if not specs:
        msg = "{0} environment has no specs to install".format(env.name)
        tty.msg(msg)
        return

    reporter = reporter_factory(specs) or lang.nullcontext()

    tty.msg("Installing environment {0}".format(env.name))
    with reporter:
        env.install_all(**install_kwargs)

    tty.debug("Regenerating environment views for {0}".format(env.name))
    with env.write_transaction():
        # write env to trigger view generation and modulefile
        # generation
        env.write()


def compute_tests_install_kwargs(specs, cli_test_arg):
    """Translate the test cli argument into the proper install argument"""
    if cli_test_arg == "all":
@@ -272,6 +412,43 @@ def compute_tests_install_kwargs(specs, cli_test_arg):
    return False


def specs_from_cli(args, install_kwargs):
    """Return abstract and concrete specs parsed from the command line."""
    abstract_specs = spack.cmd.parse_specs(args.spec)
    install_kwargs["tests"] = compute_tests_install_kwargs(abstract_specs, args.test)
    try:
        concrete_specs = spack.cmd.parse_specs(
            args.spec, concretize=True, tests=install_kwargs["tests"]
        )
    except SpackError as e:
        tty.debug(e)
        if args.log_format is not None:
            reporter = args.reporter()
            reporter.concretization_report(report_filename(args, abstract_specs), e.message)
        raise
    return abstract_specs, concrete_specs


def concrete_specs_from_file(args):
    """Return the list of concrete specs read from files."""
    result = []
    for file in args.specfiles:
        with open(file, "r") as f:
            if file.endswith("yaml") or file.endswith("yml"):
                s = spack.spec.Spec.from_yaml(f)
            else:
                s = spack.spec.Spec.from_json(f)

        concretized = s.concretized()
        if concretized.dag_hash() != s.dag_hash():
            msg = 'skipped invalid file "{0}". '
            msg += "The file does not contain a concrete spec."
            tty.warn(msg.format(file))
            continue
        result.append(concretized)
    return result


def require_user_confirmation_for_overwrite(concrete_specs, args):
    if args.yes_to_all:
        return
@@ -298,40 +475,12 @@ def require_user_confirmation_for_overwrite(concrete_specs, args):
    tty.die("Reinstallation aborted.")


def _dump_log_on_error(e: spack.build_environment.InstallError):
    e.print_context()
    assert e.pkg, "Expected InstallError to include the associated package"
    if not os.path.exists(e.pkg.build_log_path):
        tty.error("'spack install' created no log.")
    else:
        sys.stderr.write("Full build log:\n")
        with open(e.pkg.build_log_path, errors="replace") as log:
            shutil.copyfileobj(log, sys.stderr)


def _die_require_env():
    msg = "install requires a package argument or active environment"
    if "spack.yaml" in os.listdir(os.getcwd()):
        # There's a spack.yaml file in the working dir, the user may
        # have intended to use that
        msg += (
            "\n\n"
            "Did you mean to install using the `spack.yaml`"
            " in this directory? Try: \n"
            "    spack env activate .\n"
            "    spack install\n"
            "  OR\n"
            "    spack --env . install"
        )
    tty.die(msg)


def install(parser, args):
    # TODO: unify args.verbose?
    tty.set_verbose(args.verbose or args.install_verbose)

    if args.help_cdash:
        arguments.print_cdash_help()
        spack.cmd.common.arguments.print_cdash_help()
        return

    if args.no_checksum:
@@ -340,154 +489,43 @@ def install(parser, args):
    if args.deprecated:
        spack.config.set("config:deprecated", True, scope="command_line")

    if args.log_file and not args.log_format:
        msg = "the '--log-format' must be specified when using '--log-file'"
        tty.die(msg)

    arguments.sanitize_reporter_options(args)
    spack.cmd.common.arguments.sanitize_reporter_options(args)

    def reporter_factory(specs):
        if args.log_format is None:
            return lang.nullcontext()
            return None

        return spack.report.build_context_manager(
        context_manager = spack.report.build_context_manager(
            reporter=args.reporter(), filename=report_filename(args, specs=specs), specs=specs
        )
        return context_manager

    install_kwargs = install_kwargs_from_args(args)

    env = ev.active_environment()

    if not env and not args.spec and not args.specfiles:
        _die_require_env()

    try:
        if env:
            install_with_active_env(env, args, install_kwargs, reporter_factory)
        else:
            install_without_active_env(args, install_kwargs, reporter_factory)
    except spack.build_environment.InstallError as e:
        if args.show_log_on_error:
            _dump_log_on_error(e)
        raise


def _maybe_add_and_concretize(args, env, specs):
    """Handle the overloaded spack install behavior of adding
    and automatically concretizing specs"""

    # Users can opt out of accidental concretizations with --only-concrete
    if args.only_concrete:
    if not args.spec and not args.specfiles:
        # If there are no args but an active environment then install the packages from it.
        install_all_specs_from_active_environment(
            install_kwargs=install_kwargs,
            only_concrete=args.only_concrete,
            cli_test_arg=args.test,
            reporter_factory=reporter_factory,
        )
        return

    # Otherwise, we will modify the environment.
    with env.write_transaction():
        # `spack add` adds these specs.
        if args.add:
            for spec in specs:
                env.add(spec)
        # Specs from CLI
        abstract_specs, concrete_specs = specs_from_cli(args, install_kwargs)

        # `spack concretize`
        tests = compute_tests_install_kwargs(env.user_specs, args.test)
        concretized_specs = env.concretize(tests=tests)
        ev.display_specs(concretized_specs)

        # save view regeneration for later, so that we only do it
        # once, as it can be slow.
        env.write(regenerate=False)


def install_with_active_env(env: ev.Environment, args, install_kwargs, reporter_factory):
    specs = spack.cmd.parse_specs(args.spec)

    # The following two commands are equivalent:
    # 1. `spack install --add x y z`
    # 2. `spack add x y z && spack concretize && spack install --only-concrete`
    # here we do the `add` and `concretize` part.
    _maybe_add_and_concretize(args, env, specs)

    # Now we're doing `spack install --only-concrete`.
    if args.add or not specs:
        specs_to_install = env.concrete_roots()
        if not specs_to_install:
            tty.msg(f"{env.name} environment has no specs to install")
            return

    # `spack install x y z` without --add is installing matching specs in the env.
    else:
        specs_to_install = env.all_matching_specs(*specs)
        if not specs_to_install:
            msg = (
                "Cannot install '{0}' because no matching specs are in the current environment."
                " You can add specs to the environment with 'spack add {0}', or as part"
                " of the install command with 'spack install --add {0}'"
            ).format(" ".join(args.spec))
            tty.die(msg)

    install_kwargs["tests"] = compute_tests_install_kwargs(specs_to_install, args.test)

    if args.overwrite:
        require_user_confirmation_for_overwrite(specs_to_install, args)
        install_kwargs["overwrite"] = [spec.dag_hash() for spec in specs_to_install]

    try:
        with reporter_factory(specs_to_install):
            env.install_specs(specs_to_install, **install_kwargs)
    finally:
        # TODO: this is doing way too much to trigger
        # views and modules to be generated.
        with env.write_transaction():
            env.write(regenerate=True)


def concrete_specs_from_cli(args, install_kwargs):
    """Return the concrete specs parsed from the command line."""
    abstract_specs = spack.cmd.parse_specs(args.spec)
    install_kwargs["tests"] = compute_tests_install_kwargs(abstract_specs, args.test)
    try:
        concrete_specs = spack.cmd.parse_specs(
            args.spec, concretize=True, tests=install_kwargs["tests"]
        )
    except SpackError as e:
        tty.debug(e)
        if args.log_format is not None:
            reporter = args.reporter()
            reporter.concretization_report(report_filename(args, abstract_specs), e.message)
        raise
    return concrete_specs


def concrete_specs_from_file(args):
    """Return the list of concrete specs read from files."""
    result = []
    for file in args.specfiles:
        with open(file, "r") as f:
            if file.endswith("yaml") or file.endswith("yml"):
                s = spack.spec.Spec.from_yaml(f)
            else:
                s = spack.spec.Spec.from_json(f)

        concretized = s.concretized()
        if concretized.dag_hash() != s.dag_hash():
            msg = 'skipped invalid file "{0}". '
            msg += "The file does not contain a concrete spec."
            tty.warn(msg.format(file))
            continue
        result.append(concretized)
    return result


def install_without_active_env(args, install_kwargs, reporter_factory):
    concrete_specs = concrete_specs_from_cli(args, install_kwargs) + concrete_specs_from_file(args)
    # Concrete specs from YAML or JSON files
    specs_from_file = concrete_specs_from_file(args)
    abstract_specs.extend(specs_from_file)
    concrete_specs.extend(specs_from_file)

    if len(concrete_specs) == 0:
        tty.die("The `spack install` command requires a spec to install.")

    with reporter_factory(concrete_specs):
    reporter = reporter_factory(concrete_specs) or lang.nullcontext()
    with reporter:
        if args.overwrite:
            require_user_confirmation_for_overwrite(concrete_specs, args)
            install_kwargs["overwrite"] = [spec.dag_hash() for spec in concrete_specs]

        installs = [(s.package, install_kwargs) for s in concrete_specs]
        builder = PackageInstaller(installs)
        builder.install()
    install_specs(zip(abstract_specs, concrete_specs), install_kwargs, args)

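Note that both call sites now normalize a possibly-None reporter with `or lang.nullcontext()` so the `with` block works either way. The same pattern in standard-library terms:

    import contextlib

    def reporter_factory(log_format):
        if log_format is None:
            return None  # no reporting requested
        return contextlib.nullcontext("reporter")  # stand-in for the real context manager

    reporter = reporter_factory(None) or contextlib.nullcontext()
    with reporter:
        pass  # install work happens here either way
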
@@ -335,7 +335,7 @@ def not_excluded_fn(args):
    exclude_specs.extend(spack.cmd.parse_specs(str(args.exclude_specs).split()))

    def not_excluded(x):
        return not any(x.satisfies(y) for y in exclude_specs)
        return not any(x.satisfies(y, strict=True) for y in exclude_specs)

    return not_excluded

@@ -26,6 +26,7 @@
description = "run spack's unit tests (wrapper around pytest)"
section = "developer"
level = "long"
is_windows = sys.platform == "win32"


def setup_parser(subparser):
@@ -211,7 +212,7 @@ def unit_test(parser, args, unknown_args):
    # mock configuration used by unit tests
    # Note: skip on windows here because for the moment,
    # clingo is wholly unsupported from bootstrap
    if sys.platform != "win32":
    if not is_windows:
        with spack.bootstrap.ensure_bootstrap_configuration():
            spack.bootstrap.ensure_core_dependencies()
    if pytest is None:

@@ -28,6 +28,8 @@

__all__ = ["Compiler"]

is_windows = sys.platform == "win32"


@llnl.util.lang.memoized
def _get_compiler_version_output(compiler_path, version_arg, ignore_errors=()):
@@ -596,7 +598,7 @@ def search_regexps(cls, language):
        suffixes = [""]
        # Windows compilers generally have an extension of some sort
        # as do most files on Windows, handle that case here
        if sys.platform == "win32":
        if is_windows:
            ext = r"\.(?:exe|bat)"
            cls_suf = [suf + ext for suf in cls.suffixes]
            ext_suf = [ext]

@@ -84,7 +84,7 @@ def _to_dict(compiler):
    d = {}
    d["spec"] = str(compiler.spec)
    d["paths"] = dict((attr, getattr(compiler, attr, None)) for attr in _path_instance_vars)
    d["flags"] = dict((fname, " ".join(fvals)) for fname, fvals in compiler.flags.items())
    d["flags"] = dict((fname, fvals) for fname, fvals in compiler.flags)
    d["flags"].update(
        dict(
            (attr, getattr(compiler, attr, None))

@@ -61,7 +61,7 @@ def is_clang_based(self):
        return version >= ver("9.0") and "classic" not in str(version)

    version_argument = "--version"
    version_regex = r"[Cc]ray (?:clang|C :|C\+\+ :|Fortran :) [Vv]ersion.*?(\d+(\.\d+)+)"
    version_regex = r"[Vv]ersion.*?(\d+(\.\d+)+)"

    @property
    def verbose_flag(self):

@@ -122,19 +122,7 @@ def platform_toolset_ver(self):
    @property
    def cl_version(self):
        """Cl toolset version"""
        return Version(
            re.search(
                Msvc.version_regex,
                spack.compiler.get_compiler_version_output(self.cc, version_arg=None),
            ).group(1)
        )

    @property
    def vs_root(self):
        # The MSVC install root is located at a fixed level above the compiler
        # and is referenceable idiomatically via the pattern below;
        # this should be consistent across versions
        return os.path.abspath(os.path.join(self.cc, "../../../../../../../.."))
        return spack.compiler.get_compiler_version_output(self.cc)

    def setup_custom_environment(self, pkg, env):
        """Set environment variables for MSVC using the

@@ -21,7 +21,6 @@
import tempfile
from contextlib import contextmanager
from itertools import chain
from typing import Union

import archspec.cpu

@@ -44,9 +43,7 @@
from spack.version import Version, VersionList, VersionRange, ver

#: implements rudimentary logic for ABI compatibility
_abi: Union[spack.abi.ABI, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(
    lambda: spack.abi.ABI()
)
_abi = llnl.util.lang.Singleton(lambda: spack.abi.ABI())


@functools.total_ordering
@@ -137,7 +134,7 @@ def _valid_virtuals_and_externals(self, spec):

        externals = spec_externals(cspec)
        for ext in externals:
            if ext.intersects(spec):
            if ext.satisfies(spec):
                usable.append(ext)

        # If nothing is in the usable list now, it's because we aren't
@@ -203,7 +200,7 @@ def concretize_version(self, spec):

        # List of versions we could consider, in sorted order
        pkg_versions = spec.package_class.versions
        usable = [v for v in pkg_versions if any(v.intersects(sv) for sv in spec.versions)]
        usable = [v for v in pkg_versions if any(v.satisfies(sv) for sv in spec.versions)]

        yaml_prefs = PackagePrefs(spec.name, "version")

@@ -347,7 +344,7 @@ def concretize_architecture(self, spec):
        new_target_arch = spack.spec.ArchSpec((None, None, str(new_target)))
        curr_target_arch = spack.spec.ArchSpec((None, None, str(curr_target)))

        if not new_target_arch.intersects(curr_target_arch):
        if not new_target_arch.satisfies(curr_target_arch):
            # new_target is an incorrect guess based on preferences
            # and/or default
            valid_target_ranges = str(curr_target).split(",")

@@ -36,10 +36,9 @@
import re
import sys
from contextlib import contextmanager
from typing import Dict, List, Optional, Union
from typing import Dict, List, Optional

import ruamel.yaml as yaml
from ruamel.yaml.comments import Comment
from ruamel.yaml.error import MarkedYAMLError

import llnl.util.lang
@@ -78,8 +77,6 @@
    "config": spack.schema.config.schema,
    "upstreams": spack.schema.upstreams.schema,
    "bootstrap": spack.schema.bootstrap.schema,
    "ci": spack.schema.ci.schema,
    "cdash": spack.schema.cdash.schema,
}

# Same as above, but including keys for environments
@@ -363,12 +360,6 @@ def _process_dict_keyname_overrides(data):
        if sk.endswith(":"):
            key = syaml.syaml_str(sk[:-1])
            key.override = True
        elif sk.endswith("+"):
            key = syaml.syaml_str(sk[:-1])
            key.prepend = True
        elif sk.endswith("-"):
            key = syaml.syaml_str(sk[:-1])
            key.append = True
        else:
            key = sk

@@ -544,14 +535,16 @@ def update_config(
        scope = self._validate_scope(scope)  # get ConfigScope object

        # manually preserve comments
        need_comment_copy = section in scope.sections and scope.sections[section]
        need_comment_copy = section in scope.sections and scope.sections[section] is not None
        if need_comment_copy:
            comments = getattr(scope.sections[section][section], Comment.attrib, None)
            comments = getattr(
                scope.sections[section][section], yaml.comments.Comment.attrib, None
            )

        # read only the requested section's data.
        scope.sections[section] = syaml.syaml_dict({section: update_data})
        if need_comment_copy and comments:
            setattr(scope.sections[section][section], Comment.attrib, comments)
            setattr(scope.sections[section][section], yaml.comments.Comment.attrib, comments)

        scope._write_section(section)

@@ -837,7 +830,7 @@ def _config():


#: This is the singleton configuration instance for Spack.
config: Union[Configuration, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(_config)
config = llnl.util.lang.Singleton(_config)


def add_from_file(filename, scope=None):
@@ -1047,33 +1040,6 @@ def _override(string):
    return hasattr(string, "override") and string.override


def _append(string):
|
||||
"""Test if a spack YAML string is an override.
|
||||
|
||||
See ``spack_yaml`` for details. Keys in Spack YAML can end in `+:`,
|
||||
and if they do, their values append lower-precedence
|
||||
configs.
|
||||
|
||||
str, str : concatenate strings.
|
||||
[obj], [obj] : append lists.
|
||||
|
||||
"""
|
||||
return getattr(string, "append", False)
|
||||
|
||||
|
||||
def _prepend(string):
|
||||
"""Test if a spack YAML string is an override.
|
||||
|
||||
See ``spack_yaml`` for details. Keys in Spack YAML can end in `+:`,
|
||||
and if they do, their values prepend lower-precedence
|
||||
configs.
|
||||
|
||||
str, str : concatenate strings.
|
||||
[obj], [obj] : prepend lists. (default behavior)
|
||||
"""
|
||||
return getattr(string, "prepend", False)
|
||||
|
||||
|
||||
def _mark_internal(data, name):
|
||||
"""Add a simple name mark to raw YAML/JSON data.
|
||||
|
||||
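These predicates work because the loader swaps flagged keys for str subclasses that carry extra attributes, as `_process_dict_keyname_overrides` does above. A rough sketch of the mechanism, with a hypothetical `FlaggedStr` standing in for `syaml_str`:

    class FlaggedStr(str):
        """Plain str subclass that can carry parser-set merge flags."""

    def parse_key(raw):
        key = FlaggedStr(raw.rstrip(":+-"))
        if raw.endswith("::"):
            key.override = True
        elif raw.endswith("+:"):
            key.prepend = True
        elif raw.endswith("-:"):
            key.append = True
        return key

    k = parse_key("compilers+:")
    assert k == "compilers" and getattr(k, "prepend", False)

Note that by the time `_process_dict_keyname_overrides` runs, the YAML parser has already consumed the final colon of each key, which is why the code above matches bare "+"/"-" suffixes while config files spell them "+:"/"-:".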
@@ -1136,57 +1102,7 @@ def get_valid_type(path):
    raise ConfigError("Cannot determine valid type for path '%s'." % path)


def remove_yaml(dest, source):
    """UnMerges source from dest; entries in source take precedence over dest.

    This routine may modify dest and should be assigned to dest, in
    case dest was None to begin with, e.g.:

        dest = remove_yaml(dest, source)

    In the result, elements from lists from ``source`` will not appear
    as elements of lists from ``dest``. Likewise, when iterating over keys
    or items in merged ``OrderedDict`` objects, keys from ``source`` will not
    appear as keys in ``dest``.

    Config file authors can optionally end any attribute in a dict
    with `::` instead of `:`, and the key will remove the entire section
    from ``dest``.
    """

    def they_are(t):
        return isinstance(dest, t) and isinstance(source, t)

    # If source is None, there is nothing to remove.
    if source is None:
        return dest

    # Source list is prepended (for precedence)
    if they_are(list):
        # Make sure to copy ruamel comments
        dest[:] = [x for x in dest if x not in source]
        return dest

    # Source dict is merged into dest.
    elif they_are(dict):
        for sk, sv in source.items():
            # always remove the dest items. Python dicts do not overwrite
            # keys on insert, so this ensures that source keys are copied
            # into dest along with mark provenance (i.e., file/line info).
            unmerge = sk in dest
            old_dest_value = dest.pop(sk, None)

            if unmerge and not spack.config._override(sk):
                dest[sk] = remove_yaml(old_dest_value, sv)

        return dest

    # If we reach here source and dest are either different types or are
    # not both lists or dicts: replace with source.
    return dest


def merge_yaml(dest, source, prepend=False, append=False):
def merge_yaml(dest, source):
    """Merges source into dest; entries in source take precedence over dest.

    This routine may modify dest and should be assigned to dest, in
@@ -1202,9 +1118,6 @@ def merge_yaml(dest, source, prepend=False, append=False):
    Config file authors can optionally end any attribute in a dict
    with `::` instead of `:`, and the key will override that of the
    parent instead of merging.

    `+:` will extend the default prepend merge strategy to include string concatenation
    `-:` will change the merge strategy to append; it also includes string concatenation
    """

    def they_are(t):
@@ -1216,12 +1129,8 @@ def they_are(t):

    # Source list is prepended (for precedence)
    if they_are(list):
        if append:
            # Make sure to copy ruamel comments
            dest[:] = [x for x in dest if x not in source] + source
        else:
            # Make sure to copy ruamel comments
            dest[:] = source + [x for x in dest if x not in source]
        # Make sure to copy ruamel comments
        dest[:] = source + [x for x in dest if x not in source]
        return dest

    # Source dict is merged into dest.
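The list branch gives the higher-precedence scope priority in position as well as content: everything from source comes first, followed by whatever the lower-precedence scope added that source did not already mention. A small worked example with plain lists (no ruamel comment metadata):

    dest = ["-O2", "-g", "-fPIC"]   # lower-precedence scope
    source = ["-g", "-Wall"]        # higher-precedence scope

    dest[:] = source + [x for x in dest if x not in source]
    assert dest == ["-g", "-Wall", "-O2", "-fPIC"]

Assigning through dest[:] mutates the existing list object in place, which is what lets ruamel's comment bookkeeping survive the merge.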
@@ -1238,7 +1147,7 @@ def they_are(t):
            old_dest_value = dest.pop(sk, None)

            if merge and not _override(sk):
                dest[sk] = merge_yaml(old_dest_value, sv, _prepend(sk), _append(sk))
                dest[sk] = merge_yaml(old_dest_value, sv)
            else:
                # if sk ended with ::, or if it's new, completely override
                dest[sk] = copy.deepcopy(sv)
@@ -1249,13 +1158,6 @@ def they_are(t):

        return dest

    elif they_are(str):
        # Concatenate strings in prepend mode
        if prepend:
            return source + dest
        elif append:
            return dest + source

    # If we reach here source and dest are either different types or are
    # not both lists or dicts: replace with source.
    return copy.copy(source)
@@ -1281,17 +1183,6 @@ def process_config_path(path):
            front = syaml.syaml_str(front)
            front.override = True
            seen_override_in_path = True

        elif front.endswith("+"):
            front = front.rstrip("+")
            front = syaml.syaml_str(front)
            front.prepend = True

        elif front.endswith("-"):
            front = front.rstrip("-")
            front = syaml.syaml_str(front)
            front.append = True

        result.append(front)
    return result


@@ -39,10 +39,10 @@ def validate(configuration_file):
    # Ensure we have a "container" attribute with sensible defaults set
    env_dict = ev.config_dict(config)
    env_dict.setdefault(
        "container", {"format": "docker", "images": {"os": "ubuntu:22.04", "spack": "develop"}}
        "container", {"format": "docker", "images": {"os": "ubuntu:18.04", "spack": "develop"}}
    )
    env_dict["container"].setdefault("format", "docker")
    env_dict["container"].setdefault("images", {"os": "ubuntu:22.04", "spack": "develop"})
    env_dict["container"].setdefault("images", {"os": "ubuntu:18.04", "spack": "develop"})

    # Remove attributes that are not needed / allowed in the
    # container recipe
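Both setdefault levels above are needed: the outer call is a no-op whenever a "container" key exists at all, so a partially specified section would otherwise be left without defaults for its nested keys. A quick illustration of the pitfall:

    cfg = {"container": {"format": "singularity"}}  # user set only the format

    # The outer setdefault does nothing, since "container" is present...
    cfg.setdefault("container", {"format": "docker", "images": {}})
    # ...so each nested key needs its own default:
    cfg["container"].setdefault("images", {"os": "ubuntu:18.04", "spack": "develop"})

    assert cfg["container"]["format"] == "singularity"
    assert cfg["container"]["images"]["os"] == "ubuntu:18.04"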
@@ -7,7 +7,6 @@
"""
import collections
import copy
from typing import Optional

import spack.environment as ev
import spack.schema.env
@@ -132,9 +131,6 @@ class PathContext(tengine.Context):
    directly via PATH.
    """

    # Must be set by derived classes
    template_name: Optional[str] = None

    def __init__(self, config, last_phase):
        self.config = ev.config_dict(config)
        self.container_config = self.config["container"]
@@ -150,10 +146,6 @@ def __init__(self, config, last_phase):
        # Record the last phase
        self.last_phase = last_phase

    @tengine.context_property
    def depfile(self):
        return self.container_config.get("depfile", False)

    @tengine.context_property
    def run(self):
        """Information related to the run image."""
@@ -288,8 +280,7 @@ def render_phase(self):
    def __call__(self):
        """Returns the recipe as a string"""
        env = tengine.make_environment()
        template_name = self.container_config.get("template", self.template_name)
        t = env.get_template(template_name)
        t = env.get_template(self.template_name)
        return t.render(**self.to_dict())


@@ -1525,7 +1525,7 @@ def _query(
        if not (start_date < inst_date < end_date):
            continue

        if query_spec is any or rec.spec.satisfies(query_spec):
        if query_spec is any or rec.spec.satisfies(query_spec, strict=True):
            results.append(rec.spec)

    return results


@@ -29,6 +29,7 @@
import spack.util.spack_yaml
import spack.util.windows_registry

is_windows = sys.platform == "win32"
#: Information on a package that has been detected
DetectedPackage = collections.namedtuple("DetectedPackage", ["spec", "prefix"])

@@ -183,7 +184,7 @@ def library_prefix(library_dir):
    elif "lib" in lowered_components:
        idx = lowered_components.index("lib")
        return os.sep.join(components[:idx])
    elif sys.platform == "win32" and "bin" in lowered_components:
    elif is_windows and "bin" in lowered_components:
        idx = lowered_components.index("bin")
        return os.sep.join(components[:idx])
    else:
@@ -259,13 +260,13 @@ def find_windows_compiler_bundled_packages():


class WindowsKitExternalPaths(object):
    if sys.platform == "win32":
    if is_windows:
        plat_major_ver = str(winOs.windows_version()[0])

    @staticmethod
    def find_windows_kit_roots():
        """Return Windows kit root, typically %programfiles%\\Windows Kits\\10|11\\"""
        if sys.platform != "win32":
        if not is_windows:
            return []
        program_files = os.environ["PROGRAMFILES(x86)"]
        kit_base = os.path.join(
@@ -358,7 +359,7 @@ def compute_windows_program_path_for_package(pkg):
        pkg (spack.package_base.PackageBase): package for which
            Program Files location is to be computed
    """
    if sys.platform != "win32":
    if not is_windows:
        return []
    # note windows paths are fine here as this method should only ever be invoked
    # to interact with Windows
@@ -378,7 +379,7 @@ def compute_windows_user_path_for_package(pkg):
    installs see:
    https://learn.microsoft.com/en-us/dotnet/api/system.environment.specialfolder?view=netframework-4.8
    """
    if sys.platform != "win32":
    if not is_windows:
        return []

    # Current user directory
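A pattern repeated across the next several files: inline `sys.platform == "win32"` tests collapse into a module-level `is_windows` flag evaluated once at import. A sketch of the tradeoff (illustrative only, with a made-up helper):

    import sys

    is_windows = sys.platform == "win32"  # computed once, at import time

    def runtime_library_dir(prefix):
        # Windows layouts typically keep runtime DLLs under bin/, not lib/
        return prefix + ("\\bin" if is_windows else "/lib")

The flag reads more tersely at each call site, at the cost of being frozen at import time, so tests have to monkeypatch the module attribute instead of sys.platform.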
@@ -31,6 +31,8 @@
    path_to_dict,
)

is_windows = sys.platform == "win32"


def common_windows_package_paths():
    paths = WindowsCompilerExternalPaths.find_windows_compiler_bundled_packages()
@@ -55,7 +57,7 @@ def executables_in_path(path_hints):
        path_hints (list): list of paths to be searched. If None the list will be
            constructed based on the PATH environment variable.
    """
    if sys.platform == "win32":
    if is_windows:
        path_hints.extend(common_windows_package_paths())
    search_paths = llnl.util.filesystem.search_paths_for_executables(*path_hints)
    return path_to_dict(search_paths)
@@ -147,7 +149,7 @@ def by_library(packages_to_check, path_hints=None):

    path_to_lib_name = (
        libraries_in_ld_and_system_library_path(path_hints=path_hints)
        if sys.platform != "win32"
        if not is_windows
        else libraries_in_windows_paths(path_hints)
    )


@@ -21,6 +21,7 @@
import spack.util.spack_json as sjson
from spack.error import SpackError

is_windows = sys.platform == "win32"
# Note: Posixpath is used here as opposed to
# os.path.join due to spack.spec.Spec.format
# requiring forward slash path separators at this stage
@@ -345,7 +346,7 @@ def remove_install_directory(self, spec, deprecated=False):

        # Windows readonly files cannot be removed by Python
        # directly, change permissions before attempting to remove
        if sys.platform == "win32":
        if is_windows:
            kwargs = {
                "ignore_errors": False,
                "onerror": fs.readonly_file_handler(ignore_errors=False),
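The readonly_file_handler hook above exists because Python cannot delete read-only files on Windows. The usual recipe, sketched independently of Spack's implementation:

    import os
    import shutil
    import stat

    def _clear_readonly_and_retry(func, path, exc_info):
        # Make the offending path writable, then retry the operation that failed.
        os.chmod(path, stat.S_IWRITE)
        func(path)

    def remove_tree(path):
        shutil.rmtree(path, onerror=_clear_readonly_and_retry)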
@@ -13,8 +13,6 @@
import time
import urllib.parse
import urllib.request
import warnings
from typing import List, Optional

import ruamel.yaml as yaml

@@ -61,7 +59,7 @@


#: currently activated environment
_active_environment: Optional["Environment"] = None
_active_environment = None


#: default path where environments are stored in the spack tree
@@ -351,8 +349,7 @@ def _is_dev_spec_and_has_changed(spec):

def _spec_needs_overwrite(spec, changed_dev_specs):
    """Check whether the current spec needs to be overwritten because either it has
    changed itself or one of its dependencies have changed
    """
    changed itself or one of its dependencies have changed"""
    # if it's not installed, we don't need to overwrite it
    if not spec.installed:
        return False
@@ -698,8 +695,6 @@ def __init__(self, path, init_file=None, with_view=None, keep_relative=False):
        # This attribute will be set properly from configuration
        # during concretization
        self.unify = None
        self.new_specs = []
        self.new_installs = []
        self.clear()

        if init_file:
@@ -1556,11 +1551,12 @@ def update_default_view(self, viewpath):

    def regenerate_views(self):
        if not self.views:
            tty.debug("Skip view update, this environment does not maintain a view")
            tty.debug("Skip view update, this environment does not" " maintain a view")
            return

        concretized_root_specs = [s for _, s in self.concretized_specs()]
        for view in self.views.values():
            view.regenerate(self.concrete_roots())
            view.regenerate(concretized_root_specs)

    def check_views(self):
        """Checks if the environments default view can be activated."""
@@ -1568,7 +1564,7 @@ def check_views(self):
        # This is effectively a no-op, but it touches all packages in the
        # default view if they are installed.
        for view_name, view in self.views.items():
            for spec in self.concrete_roots():
            for _, spec in self.concretized_specs():
                if spec in view and spec.package and spec.installed:
                    msg = '{0} in view "{1}"'
                    tty.debug(msg.format(spec.name, view_name))
@@ -1586,7 +1582,7 @@ def _env_modifications_for_default_view(self, reverse=False):
        visited = set()

        errors = []
        for root_spec in self.concrete_roots():
        for _, root_spec in self.concretized_specs():
            if root_spec in self.default_view and root_spec.installed and root_spec.package:
                for spec in root_spec.traverse(deptype="run", root=True):
                    if spec.name in visited:
@@ -1751,8 +1747,31 @@ def install_all(self, **install_args):
        self.install_specs(None, **install_args)

    def install_specs(self, specs=None, **install_args):
        specs_to_install = specs or [concrete for _, concrete in self.concretized_specs()]
        tty.debug("Processing {0} specs".format(len(specs_to_install)))
        tty.debug("Assessing installation status of environment packages")
        # If "spack install" is invoked repeatedly for a large environment
        # where all specs are already installed, the operation can take
        # a large amount of time due to repeatedly acquiring and releasing
        # locks. As a small optimization, drop already installed root specs.
        installed_roots, uninstalled_roots = self._partition_roots_by_install_status()
        if specs:
            specs_to_install = [s for s in specs if s not in installed_roots]
            specs_dropped = [s for s in specs if s in installed_roots]
        else:
            specs_to_install = uninstalled_roots
            specs_dropped = installed_roots

        # We need to repeat the work of the installer thanks to the above optimization:
        # Already installed root specs should be marked explicitly installed in the
        # database.
        if specs_dropped:
            with spack.store.db.write_transaction():  # do all in one transaction
                for spec in specs_dropped:
                    spack.store.db.update_explicit(spec, True)

        if not specs_to_install:
            tty.msg("All of the packages are already installed")
        else:
            tty.debug("Processing {0} uninstalled specs".format(len(specs_to_install)))

        specs_to_overwrite = self._get_overwrite_specs()
        tty.debug("{0} specs need to be overwritten".format(len(specs_to_overwrite)))
@@ -1780,6 +1799,9 @@ def install_specs(self, specs=None, **install_args):
                    "Could not install log links for {0}: {1}".format(spec.name, str(e))
                )

        with self.write_transaction():
            self.regenerate_views()

    def all_specs(self):
        """Return all specs, even those a user spec would shadow."""
        roots = [self.specs_by_hash[h] for h in self.concretized_order]
@@ -1824,11 +1846,6 @@ def concretized_specs(self):
        for s, h in zip(self.concretized_user_specs, self.concretized_order):
            yield (s, self.specs_by_hash[h])

    def concrete_roots(self):
        """Same as concretized_specs, except it returns the list of concrete
        roots *without* associated user spec"""
        return [root for _, root in self.concretized_specs()]

    def get_by_hash(self, dag_hash):
        matches = {}
        roots = [self.specs_by_hash[h] for h in self.concretized_order]
@@ -1845,16 +1862,6 @@ def get_one_by_hash(self, dag_hash):
        assert len(hash_matches) == 1
        return hash_matches[0]

    def all_matching_specs(self, *specs: spack.spec.Spec) -> List[Spec]:
        """Returns all concretized specs in the environment satisfying any of the input specs"""
        key = lambda s: s.dag_hash()
        return [
            s
            for s in spack.traverse.traverse_nodes(self.concrete_roots(), key=key)
            if any(s.satisfies(t) for t in specs)
        ]

    @spack.repo.autospec
    def matching_spec(self, spec):
        """
        Given a spec (likely not concretized), find a matching concretized
@@ -1872,60 +1879,59 @@ def matching_spec(self, spec):
        and multiple dependency specs match, then this raises an error
        and reports all matching specs.
        """
        env_root_to_user = {root.dag_hash(): user for user, root in self.concretized_specs()}
        root_matches, dep_matches = [], []
        # Root specs will be keyed by concrete spec, value abstract
        # Dependency-only specs will have value None
        matches = {}

        for env_spec in spack.traverse.traverse_nodes(
            specs=[root for _, root in self.concretized_specs()],
            key=lambda s: s.dag_hash(),
            order="breadth",
        ):
            if not env_spec.satisfies(spec):
        if not isinstance(spec, spack.spec.Spec):
            spec = spack.spec.Spec(spec)

        for user_spec, concretized_user_spec in self.concretized_specs():
            # Deal with concrete specs differently
            if spec.concrete:
                if spec in concretized_user_spec:
                    matches[spec] = spec
                continue

            # If the spec is concrete, then there is no possibility of multiple matches,
            # and we immediately return the single match
            if spec.concrete:
                return env_spec
            if concretized_user_spec.satisfies(spec):
                matches[concretized_user_spec] = user_spec
            for dep_spec in concretized_user_spec.traverse(root=False):
                if dep_spec.satisfies(spec):
                    # Don't overwrite the abstract spec if present
                    # If not present already, set to None
                    matches[dep_spec] = matches.get(dep_spec, None)

            # Distinguish between environment roots and deps. Specs that are both
            # are classified as environment roots.
            user_spec = env_root_to_user.get(env_spec.dag_hash())
            if user_spec:
                root_matches.append((env_spec, user_spec))
            else:
                dep_matches.append(env_spec)

        # No matching spec
        if not root_matches and not dep_matches:
        if not matches:
            return None
        elif len(matches) == 1:
            return list(matches.keys())[0]

        root_matches = dict(
            (concrete, abstract) for concrete, abstract in matches.items() if abstract
        )

        # Single root spec, any number of dep specs => return root spec.
        if len(root_matches) == 1:
            return root_matches[0][0]

        if not root_matches and len(dep_matches) == 1:
            return dep_matches[0]
            return list(root_matches.items())[0][0]

        # More than one spec matched, and either multiple roots matched or
        # none of the matches were roots
        # If multiple root specs match, it is assumed that the abstract
        # spec will most-succinctly summarize the difference between them
        # (and the user can enter one of these to disambiguate)
        match_strings = []
        fmt_str = "{hash:7} " + spack.spec.default_format
        color = clr.get_color_when()
        match_strings = [
            f"Root spec {abstract.format(color=color)}\n {concrete.format(fmt_str, color=color)}"
            for concrete, abstract in root_matches
        ]
        match_strings.extend(
            f"Dependency spec\n {s.format(fmt_str, color=color)}" for s in dep_matches
        )
        for concrete, abstract in matches.items():
            if abstract:
                s = "Root spec %s\n %s" % (abstract, concrete.format(fmt_str))
            else:
                s = "Dependency spec\n %s" % concrete.format(fmt_str)
            match_strings.append(s)
        matches_str = "\n".join(match_strings)

        raise SpackEnvironmentError(
            f"{spec} matches multiple specs in the environment {self.name}: \n{matches_str}"
        msg = "{0} matches multiple specs in the environment {1}: \n" "{2}".format(
            str(spec), self.name, matches_str
        )
        raise SpackEnvironmentError(msg)

    def removed_specs(self):
        """Tuples of (user spec, concrete spec) for all specs that will be
@@ -2057,73 +2063,18 @@ def _read_lockfile_dict(self, d):
        for spec_dag_hash in self.concretized_order:
            self.specs_by_hash[spec_dag_hash] = first_seen[spec_dag_hash]

    def write(self, regenerate: bool = True) -> None:
    def write(self, regenerate=True):
        """Writes an in-memory environment to its location on disk.

        Write out package files for each newly concretized spec. Also
        regenerate any views associated with the environment and run post-write
        hooks, if regenerate is True.

        Args:
            regenerate: regenerate views and run post-write hooks as well as writing if True.
        Arguments:
            regenerate (bool): regenerate views and run post-write hooks as
                well as writing if True.
        """
        self.manifest_uptodate_or_warn()
        if self.specs_by_hash:
            self.ensure_env_directory_exists(dot_env=True)
            self.update_environment_repository()
            self.update_manifest()
            # Write the lock file last. This is useful for Makefiles
            # with `spack.lock: spack.yaml` rules, where the target
            # should be newer than the prerequisite to avoid
            # redundant re-concretization.
            self.update_lockfile()
        else:
            self.ensure_env_directory_exists(dot_env=False)
            with fs.safe_remove(self.lock_path):
                self.update_manifest()

        if regenerate:
            self.regenerate_views()
            spack.hooks.post_env_write(self)

        self._reset_new_specs_and_installs()

    def _reset_new_specs_and_installs(self) -> None:
        self.new_specs = []
        self.new_installs = []

    def update_lockfile(self) -> None:
        with fs.write_tmp_and_move(self.lock_path) as f:
            sjson.dump(self._to_lockfile_dict(), stream=f)

    def ensure_env_directory_exists(self, dot_env: bool = False) -> None:
        """Ensure that the root directory of the environment exists

        Args:
            dot_env: if True also ensures that the <root>/.env directory exists
        """
        fs.mkdirp(self.path)
        if dot_env:
            fs.mkdirp(self.env_subdir_path)

    def update_environment_repository(self) -> None:
        """Updates the repository associated with the environment."""
        for spec in spack.traverse.traverse_nodes(self.new_specs):
            if not spec.concrete:
                raise ValueError("specs passed to environment.write() must be concrete!")

            self._add_to_environment_repository(spec)

    def _add_to_environment_repository(self, spec_node: Spec) -> None:
        """Add the root node of the spec to the environment repository"""
        repository_dir = os.path.join(self.repos_path, spec_node.namespace)
        repository = spack.repo.create_or_construct(repository_dir, spec_node.namespace)
        pkg_dir = repository.dirname_for_package_name(spec_node.name)
        fs.mkdirp(pkg_dir)
        spack.repo.path.dump_provenance(spec_node, pkg_dir)

    def manifest_uptodate_or_warn(self):
        """Emits a warning if the manifest file is not up-to-date."""
        # Warn that environments are not in the latest format.
        if not is_latest_format(self.manifest_path):
            ver = ".".join(str(s) for s in spack.spack_version_info[:2])
            msg = (
@@ -2133,14 +2084,61 @@ def manifest_uptodate_or_warn(self):
                "Note that versions of Spack older than {} may not be able to "
                "use the updated configuration."
            )
            warnings.warn(msg.format(self.name, self.name, ver))
            tty.warn(msg.format(self.name, self.name, ver))

    def update_manifest(self):
        # ensure path in var/spack/environments
        fs.mkdirp(self.path)

        yaml_dict = config_dict(self.yaml)
        raw_yaml_dict = config_dict(self.raw_yaml)

        if self.specs_by_hash:
            # ensure the prefix/.env directory exists
            fs.mkdirp(self.env_subdir_path)

            for spec in spack.traverse.traverse_nodes(self.new_specs):
                if not spec.concrete:
                    raise ValueError("specs passed to environment.write() " "must be concrete!")

                root = os.path.join(self.repos_path, spec.namespace)
                repo = spack.repo.create_or_construct(root, spec.namespace)
                pkg_dir = repo.dirname_for_package_name(spec.name)

                fs.mkdirp(pkg_dir)
                spack.repo.path.dump_provenance(spec, pkg_dir)

            self._update_and_write_manifest(raw_yaml_dict, yaml_dict)

            # Write the lock file last. This is useful for Makefiles
            # with `spack.lock: spack.yaml` rules, where the target
            # should be newer than the prerequisite to avoid
            # redundant re-concretization.
            with fs.write_tmp_and_move(self.lock_path) as f:
                sjson.dump(self._to_lockfile_dict(), stream=f)
        else:
            with fs.safe_remove(self.lock_path):
                self._update_and_write_manifest(raw_yaml_dict, yaml_dict)

        # TODO: rethink where this needs to happen along with
        # writing. For some of the commands (like install, which write
        # concrete specs AND regen) this might as well be a separate
        # call. But, having it here makes the views consistent with the
        # concretized environment for most operations. Which is the
        # special case?
        if regenerate:
            self.regenerate_views()

        # Run post_env_hooks
        spack.hooks.post_env_write(self)

        # new specs and new installs reset at write time
        self.new_specs = []
        self.new_installs = []

    def _update_and_write_manifest(self, raw_yaml_dict, yaml_dict):
        """Update YAML manifest for this environment based on changes to
        spec lists and views and write it.
        """
        yaml_dict = config_dict(self.yaml)
        raw_yaml_dict = config_dict(self.raw_yaml)
        # invalidate _repo cache
        self._repo = None
        # put any changes in the definitions in the YAML
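The lock file write above goes through fs.write_tmp_and_move, which follows the standard write-to-temp-then-rename idiom so readers never observe a half-written spack.lock. A generic sketch of that idiom (not the llnl.util implementation):

    import os
    import tempfile

    def write_atomically(path, text):
        # Write into the destination directory so the final rename stays on
        # one filesystem and remains atomic.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
        try:
            with os.fdopen(fd, "w") as f:
                f.write(text)
            os.replace(tmp, path)  # atomic on POSIX and Windows
        except BaseException:
            os.unlink(tmp)
            raise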
@@ -2194,7 +2192,6 @@ def update_manifest(self):
            view = dict((name, view.to_dict()) for name, view in self.views.items())
        else:
            view = False

        yaml_dict["view"] = view

        if self.dev_specs:
@@ -2240,19 +2237,12 @@ def __exit__(self, exc_type, exc_val, exc_tb):
        activate(self._previous_active)


def yaml_equivalent(first, second) -> bool:
def yaml_equivalent(first, second):
    """Returns whether two spack yaml items are equivalent, including overrides"""
    # YAML has timestamps and dates, but we don't use them yet in schemas
    if isinstance(first, dict):
        return isinstance(second, dict) and _equiv_dict(first, second)
    elif isinstance(first, list):
        return isinstance(second, list) and _equiv_list(first, second)
    elif isinstance(first, bool):
        return isinstance(second, bool) and first is second
    elif isinstance(first, int):
        return isinstance(second, int) and first == second
    elif first is None:
        return second is None
    else:  # it's a string
        return isinstance(second, str) and first == second
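In yaml_equivalent the bool test has to run before the int test: bool is a subclass of int in Python, so isinstance(x, int) alone would also accept True and False, and equality would then conflate 1 with True. A short demonstration:

    assert isinstance(True, int)  # bool is a subclass of int
    assert 1 == True              # ints and bools compare equal under ==

Checking isinstance(first, bool) first, and comparing bools with `is`, keeps 1 and True distinct as YAML items.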
@@ -2323,7 +2313,7 @@ def _concretize_from_constraints(spec_constraints, tests=False):
    invalid_deps = [
        c
        for c in spec_constraints
        if any(c.satisfies(invd) for invd in invalid_deps_string)
        if any(c.satisfies(invd, strict=True) for invd in invalid_deps_string)
    ]
    if len(invalid_deps) != len(invalid_deps_string):
        raise e


@@ -28,6 +28,7 @@
import os.path
import re
import shutil
import sys
import urllib.parse
from typing import List, Optional

@@ -52,6 +53,7 @@

#: List of all fetch strategies, created by FetchStrategy metaclass.
all_strategies = []
is_windows = sys.platform == "win32"

CONTENT_TYPE_MISMATCH_WARNING_TEMPLATE = (
    "The contents of {subject} look like {content_type}. Either the URL"
@@ -1501,7 +1503,7 @@ def _from_merged_attrs(fetcher, pkg, version):
    return fetcher(**attrs)


def for_package_version(pkg, version=None):
def for_package_version(pkg, version):
    """Determine a fetch strategy based on the arguments supplied to
    version() in the package description."""

@@ -1512,18 +1514,8 @@ def for_package_version(pkg, version=None):

    check_pkg_attributes(pkg)

    if version is not None:
        assert not pkg.spec.concrete, "concrete specs should not pass the 'version=' argument"
        # Specs are initialized with the universe range, if no version information is given,
        # so here we make sure we always match the version passed as argument
        if not isinstance(version, spack.version.VersionBase):
            version = spack.version.Version(version)

        version_list = spack.version.VersionList()
        version_list.add(version)
        pkg.spec.versions = version_list
    else:
        version = pkg.version
    if not isinstance(version, spack.version.VersionBase):
        version = spack.version.Version(version)

    # if it's a commit, we must use a GitFetchStrategy
    if isinstance(version, spack.version.GitVersion):


@@ -12,7 +12,7 @@
Currently the following hooks are supported:

    * pre_install(spec)
    * post_install(spec, explicit)
    * post_install(spec)
    * pre_uninstall(spec)
    * post_uninstall(spec)
    * on_install_start(spec)
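The hunks that follow drop the explicit argument from every post_install hook, restoring the single-argument signature listed above. Under this convention a hook is just a module exposing matching module-level functions, as the licensing, modules, permissions, sbang and verify hooks below all do; a minimal hypothetical hook module:

    # contents of a hypothetical hook module, e.g. spack/hooks/log_installs.py
    import llnl.util.tty as tty

    def post_install(spec):
        # called with the concrete spec after a package lands in the store
        tty.msg("installed: {0}".format(spec))

    def post_uninstall(spec):
        tty.msg("removed: {0}".format(spec))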
@@ -131,7 +131,7 @@ def find_and_patch_sonames(prefix, exclude_list, patchelf):
    return patch_sonames(patchelf, prefix, relative_paths)


def post_install(spec, explicit=None):
def post_install(spec):
    # Skip if disabled
    if not spack.config.get("config:shared_linking:bind", False):
        return


@@ -169,7 +169,7 @@ def write_license_file(pkg, license_path):
    f.close()


def post_install(spec, explicit=None):
def post_install(spec):
    """This hook symlinks local licenses to the global license for
    licensed software.
    """


@@ -10,7 +10,7 @@
import spack.modules.common


def _for_each_enabled(spec, method_name, explicit=None):
def _for_each_enabled(spec, method_name):
    """Calls a method for each enabled module"""
    set_names = set(spack.config.get("modules", {}).keys())
    # If we have old-style modules enabled, we put those in the default set
@@ -27,7 +27,7 @@ def _for_each_enabled(spec, method_name, explicit=None):
            continue

        for type in enabled:
            generator = spack.modules.module_types[type](spec, name, explicit)
            generator = spack.modules.module_types[type](spec, name)
            try:
                getattr(generator, method_name)()
            except RuntimeError as e:
@@ -36,7 +36,7 @@ def _for_each_enabled(spec, method_name, explicit=None):
                tty.warn(msg.format(method_name, str(e)))


def post_install(spec, explicit):
def post_install(spec):
    import spack.environment as ev  # break import cycle

    if ev.active_environment():
@@ -45,7 +45,7 @@ def post_install(spec, explicit):
        # can manage interactions between env views and modules
        return

    _for_each_enabled(spec, "write", explicit)
    _for_each_enabled(spec, "write")


def post_uninstall(spec):


@@ -8,7 +8,7 @@
import spack.util.file_permissions as fp


def post_install(spec, explicit=None):
def post_install(spec):
    if not spec.external:
        fp.set_permissions_by_spec(spec.prefix, spec)


@@ -30,7 +30,8 @@

#: Groupdb does not exist on Windows; prevent imports
#: on unsupported systems
if sys.platform != "win32":
is_windows = sys.platform == "win32"
if not is_windows:
    import grp

#: Spack itself also limits the shebang line to at most 4KB, which should be plenty.
@@ -224,7 +225,7 @@ def install_sbang():
    os.rename(sbang_tmp_path, sbang_path)


def post_install(spec, explicit=None):
def post_install(spec):
    """This hook edits scripts so that they call /bin/bash
    $spack_prefix/bin/sbang instead of something longer than the
    shebang limit.
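sbang exists because kernels truncate long shebang lines (historically 127 bytes on Linux; the comment above notes Spack additionally caps the line at 4KB). A rough sketch of the rewrite this hook performs, with the 127-byte limit and the sbang location as illustrative assumptions:

    SHEBANG_LIMIT = 127  # typical Linux kernel limit, assumed for illustration

    def needs_sbang(script_path):
        with open(script_path, "rb") as f:
            first = f.readline()
        return first.startswith(b"#!") and len(first) > SHEBANG_LIMIT

    def rewrite_with_sbang(script_path, sbang="/opt/spack/bin/sbang"):
        with open(script_path, "rb") as f:
            original = f.read()
        with open(script_path, "wb") as f:
            # sbang re-reads the original interpreter line, now on line two
            f.write(b"#!/bin/bash " + sbang.encode() + b"\n")
            f.write(original)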
@@ -6,6 +6,6 @@
import spack.verify


def post_install(spec, explicit=None):
def post_install(spec):
    if not spec.external:
        spack.verify.write_manifest(spec)


@@ -84,6 +84,9 @@
#: queue invariants).
STATUS_REMOVED = "removed"

is_windows = sys.platform == "win32"
is_osx = sys.platform == "darwin"


class InstallAction(object):
    #: Don't perform an install
@@ -166,9 +169,9 @@ def _do_fake_install(pkg):
    if not pkg.name.startswith("lib"):
        library = "lib" + library

    plat_shared = ".dll" if sys.platform == "win32" else ".so"
    plat_static = ".lib" if sys.platform == "win32" else ".a"
    dso_suffix = ".dylib" if sys.platform == "darwin" else plat_shared
    plat_shared = ".dll" if is_windows else ".so"
    plat_static = ".lib" if is_windows else ".a"
    dso_suffix = ".dylib" if is_osx else plat_shared

    # Install fake command
    fs.mkdirp(pkg.prefix.bin)
@@ -315,7 +318,7 @@ def _install_from_cache(pkg, cache_only, explicit, unsigned=False):
    tty.debug("Successfully extracted {0} from binary cache".format(pkg_id))
    _print_timer(pre=_log_prefix(pkg.name), pkg_id=pkg_id, timer=t)
    _print_installed_pkg(pkg.spec.prefix)
    spack.hooks.post_install(pkg.spec, explicit)
    spack.hooks.post_install(pkg.spec)
    return True


@@ -353,7 +356,7 @@ def _process_external_package(pkg, explicit):
    # For external packages we just need to run
    # post-install hooks to generate module files.
    tty.debug("{0} generating module file".format(pre))
    spack.hooks.post_install(spec, explicit)
    spack.hooks.post_install(spec)

    # Add to the DB
    tty.debug("{0} registering into DB".format(pre))
@@ -1260,10 +1263,6 @@ def _install_task(self, task):
        if not pkg.unit_test_check():
            return

        # Injecting information to know if this installation request is the root one
        # to determine in BuildProcessInstaller whether installation is explicit or not
        install_args["is_root"] = task.is_root

        try:
            self._setup_install_dir(pkg)

@@ -1883,9 +1882,6 @@ def __init__(self, pkg, install_args):
        # whether to enable echoing of build output initially or not
        self.verbose = install_args.get("verbose", False)

        # whether installation was explicitly requested by the user
        self.explicit = install_args.get("is_root", False) and install_args.get("explicit", True)

        # env before starting installation
        self.unmodified_env = install_args.get("unmodified_env", {})

@@ -1946,7 +1942,7 @@ def run(self):
        self.timer.write_json(timelog)

        # Run post install hooks before build stage is removed.
        spack.hooks.post_install(self.pkg.spec, self.explicit)
        spack.hooks.post_install(self.pkg.spec)

        _print_timer(pre=self.pre, pkg_id=self.pkg_id, timer=self.timer)
        _print_installed_pkg(self.pkg.prefix)


@@ -575,7 +575,7 @@ def setup_main_options(args):
    if args.debug:
        spack.util.debug.register_interrupt_handler()
        spack.config.set("config:debug", True, scope="command_line")
        spack.util.environment.TRACING_ENABLED = True
        spack.util.environment.tracing_enabled = True

    if args.timestamp:
        tty.set_timestamp(True)


@@ -492,7 +492,7 @@ def get_matching_versions(specs, num_versions=1):
            break

        # Generate only versions that satisfy the spec.
        if spec.concrete or v.intersects(spec.versions):
        if spec.concrete or v.satisfies(spec.versions):
            s = spack.spec.Spec(pkg.name)
            s.versions = VersionList([v])
            s.variants = spec.variants.copy()


@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

"""This package contains code for creating environment modules, which can
include Tcl non-hierarchical modules, Lua hierarchical modules, and others.
include TCL non-hierarchical modules, LUA hierarchical modules, and others.
"""

from __future__ import absolute_import


@@ -207,7 +207,7 @@ def merge_config_rules(configuration, spec):
    # evaluated in order of appearance in the module file
    spec_configuration = module_specific_configuration.pop("all", {})
    for constraint, action in module_specific_configuration.items():
        if spec.satisfies(constraint):
        if spec.satisfies(constraint, strict=True):
            if hasattr(constraint, "override") and constraint.override:
                spec_configuration = {}
            update_dictionary_extending_lists(spec_configuration, action)
@@ -428,17 +428,12 @@ class BaseConfiguration(object):

    default_projections = {"all": "{name}-{version}-{compiler.name}-{compiler.version}"}

    def __init__(self, spec, module_set_name, explicit=None):
    def __init__(self, spec, module_set_name):
        # Module where type(self) is defined
        self.module = inspect.getmodule(self)
        # Spec for which we want to generate a module file
        self.spec = spec
        self.name = module_set_name
        # Software installation has been explicitly asked (get this information from
        # db when querying an existing module, like during a refresh or rm operations)
        if explicit is None:
            explicit = spec._installed_explicitly()
        self.explicit = explicit
        # Dictionary of configuration options that should be applied
        # to the spec
        self.conf = merge_config_rules(self.module.configuration(self.name), self.spec)
@@ -524,7 +519,8 @@ def excluded(self):
        # Should I exclude the module because it's implicit?
        # DEPRECATED: remove 'blacklist_implicits' in v0.20
        exclude_implicits = get_deprecated(conf, "exclude_implicits", "blacklist_implicits", None)
        excluded_as_implicit = exclude_implicits and not self.explicit
        installed_implicitly = not spec._installed_explicitly()
        excluded_as_implicit = exclude_implicits and installed_implicitly

        def debug_info(line_header, match_list):
            if match_list:
@@ -703,7 +699,7 @@ def configure_options(self):

        if os.path.exists(pkg.install_configure_args_path):
            with open(pkg.install_configure_args_path, "r") as args_file:
                return spack.util.path.padding_filter(args_file.read())
                return args_file.read()

        # Returning a false-like value makes the default templates skip
        # the configure option section
@@ -792,8 +788,7 @@ def autoload(self):
    def _create_module_list_of(self, what):
        m = self.conf.module
        name = self.conf.name
        explicit = self.conf.explicit
        return [m.make_layout(x, name, explicit).use_name for x in getattr(self.conf, what)]
        return [m.make_layout(x, name).use_name for x in getattr(self.conf, what)]

    @tengine.context_property
    def verbose(self):
@@ -802,7 +797,7 @@ def verbose(self):


class BaseModuleFileWriter(object):
    def __init__(self, spec, module_set_name, explicit=None):
    def __init__(self, spec, module_set_name):
        self.spec = spec

        # This class is meant to be derived. Get the module of the
@@ -811,9 +806,9 @@ def __init__(self, spec, module_set_name, explicit=None):
        m = self.module

        # Create the triplet of configuration/layout/context
        self.conf = m.make_configuration(spec, module_set_name, explicit)
        self.layout = m.make_layout(spec, module_set_name, explicit)
        self.context = m.make_context(spec, module_set_name, explicit)
        self.conf = m.make_configuration(spec, module_set_name)
        self.layout = m.make_layout(spec, module_set_name)
        self.context = m.make_context(spec, module_set_name)

        # Check if a default template has been defined,
        # throw if not found
@@ -935,7 +930,6 @@ def remove(self):
        if os.path.exists(mod_file):
            try:
                os.remove(mod_file)  # Remove the module file
                self.remove_module_defaults()  # Remove default targeting module file
                os.removedirs(
                    os.path.dirname(mod_file)
                )  # Remove all the empty directories from the leaf up
@@ -943,18 +937,6 @@ def remove(self):
                # removedirs throws OSError on first non-empty directory found
                pass

    def remove_module_defaults(self):
        if not any(self.spec.satisfies(default) for default in self.conf.defaults):
            return

        # This spec matches a default, symlink needs to be removed as we remove the module
        # file it targets.
        default_symlink = os.path.join(os.path.dirname(self.layout.filename), "default")
        try:
            os.unlink(default_symlink)
        except OSError:
            pass


@contextlib.contextmanager
def disable_modules():


@@ -33,26 +33,24 @@ def configuration(module_set_name):
configuration_registry: Dict[str, Any] = {}


def make_configuration(spec, module_set_name, explicit):
def make_configuration(spec, module_set_name):
    """Returns the lmod configuration for spec"""
    key = (spec.dag_hash(), module_set_name, explicit)
    key = (spec.dag_hash(), module_set_name)
    try:
        return configuration_registry[key]
    except KeyError:
        return configuration_registry.setdefault(
            key, LmodConfiguration(spec, module_set_name, explicit)
        )
        return configuration_registry.setdefault(key, LmodConfiguration(spec, module_set_name))


def make_layout(spec, module_set_name, explicit):
def make_layout(spec, module_set_name):
    """Returns the layout information for spec"""
    conf = make_configuration(spec, module_set_name, explicit)
    conf = make_configuration(spec, module_set_name)
    return LmodFileLayout(conf)


def make_context(spec, module_set_name, explicit):
def make_context(spec, module_set_name):
    """Returns the context information for spec"""
    conf = make_configuration(spec, module_set_name, explicit)
    conf = make_configuration(spec, module_set_name)
    return LmodContext(conf)
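make_configuration above caches one configuration object per key, and dropping explicit from the key restores sharing between callers that previously passed different flags for the same spec. The caching shape, reduced to its core:

    _registry = {}

    def make_thing(key, factory):
        try:
            return _registry[key]
        except KeyError:
            # setdefault (rather than plain assignment) keeps a single winner
            # if two callers race to create the same entry
            return _registry.setdefault(key, factory(key))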
@@ -73,7 +71,7 @@ def guess_core_compilers(name, store=False):
    # A compiler is considered to be a core compiler if any of the
    # C, C++ or Fortran compilers reside in a system directory
    is_system_compiler = any(
        os.path.dirname(x) in spack.util.environment.SYSTEM_DIRS
        os.path.dirname(x) in spack.util.environment.system_dirs
        for x in compiler["paths"].values()
        if x is not None
    )
@@ -411,7 +409,7 @@ def missing(self):
    @tengine.context_property
    def unlocked_paths(self):
        """Returns the list of paths that are unlocked unconditionally."""
        layout = make_layout(self.spec, self.conf.name, self.conf.explicit)
        layout = make_layout(self.spec, self.conf.name)
        return [os.path.join(*parts) for parts in layout.unlocked_paths[None]]

    @tengine.context_property
@@ -419,7 +417,7 @@ def conditionally_unlocked_paths(self):
        """Returns the list of paths that are unlocked conditionally.
        Each item in the list is a tuple with the structure (condition, path).
        """
        layout = make_layout(self.spec, self.conf.name, self.conf.explicit)
        layout = make_layout(self.spec, self.conf.name)
        value = []
        conditional_paths = layout.unlocked_paths
        conditional_paths.pop(None)


@@ -3,7 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

"""This module implements the classes necessary to generate Tcl
"""This module implements the classes necessary to generate TCL
non-hierarchical modules.
"""
import posixpath
@@ -19,7 +19,7 @@
from .common import BaseConfiguration, BaseContext, BaseFileLayout, BaseModuleFileWriter


#: Tcl specific part of the configuration
#: TCL specific part of the configuration
def configuration(module_set_name):
    config_path = "modules:%s:tcl" % module_set_name
    config = spack.config.get(config_path, {})
@@ -30,26 +30,24 @@ def configuration(module_set_name):
configuration_registry: Dict[str, Any] = {}


def make_configuration(spec, module_set_name, explicit):
def make_configuration(spec, module_set_name):
    """Returns the tcl configuration for spec"""
    key = (spec.dag_hash(), module_set_name, explicit)
    key = (spec.dag_hash(), module_set_name)
    try:
        return configuration_registry[key]
    except KeyError:
        return configuration_registry.setdefault(
            key, TclConfiguration(spec, module_set_name, explicit)
        )
        return configuration_registry.setdefault(key, TclConfiguration(spec, module_set_name))


def make_layout(spec, module_set_name, explicit):
def make_layout(spec, module_set_name):
    """Returns the layout information for spec"""
    conf = make_configuration(spec, module_set_name, explicit)
    conf = make_configuration(spec, module_set_name)
    return TclFileLayout(conf)


def make_context(spec, module_set_name, explicit):
def make_context(spec, module_set_name):
    """Returns the context information for spec"""
    conf = make_configuration(spec, module_set_name, explicit)
    conf = make_configuration(spec, module_set_name)
    return TclContext(conf)


@@ -36,7 +36,7 @@
    cmake_cache_path,
    cmake_cache_string,
)
from spack.build_systems.cmake import CMakePackage, generator
from spack.build_systems.cmake import CMakePackage
from spack.build_systems.cuda import CudaPackage
from spack.build_systems.generic import Package
from spack.build_systems.gnu import GNUMirrorPackage

@@ -57,7 +57,7 @@
from spack.filesystem_view import YamlFilesystemView
from spack.install_test import TestFailure, TestSuite
from spack.installer import InstallError, PackageInstaller
from spack.stage import ResourceStage, Stage, StageComposite, compute_stage_name
from spack.stage import ResourceStage, Stage, StageComposite, stage_prefix
from spack.util.executable import ProcessError, which
from spack.util.package_hash import package_hash
from spack.util.prefix import Prefix
@@ -92,6 +92,9 @@
_spack_configure_argsfile = "spack-configure-args.txt"


is_windows = sys.platform == "win32"


def deprecated_version(pkg, version):
    """Return True if the version is deprecated, False otherwise.

@@ -162,7 +165,7 @@ def windows_establish_runtime_linkage(self):

    Performs symlinking to incorporate rpath dependencies to Windows runtime search paths
    """
    if sys.platform == "win32":
    if is_windows:
        self.win_rpath.add_library_dependent(*self.win_add_library_dependent())
        self.win_rpath.add_rpath(*self.win_add_rpath())
        self.win_rpath.establish_link()
@@ -207,7 +210,7 @@ def to_windows_exe(exe):
    plat_exe = []
    if hasattr(cls, "executables"):
        for exe in cls.executables:
            if sys.platform == "win32":
            if is_windows:
                exe = to_windows_exe(exe)
            plat_exe.append(exe)
    return plat_exe
@@ -1022,7 +1025,8 @@ def _make_root_stage(self, fetcher):
        )
        # Construct a path where the stage should build..
        s = self.spec
        stage_name = compute_stage_name(s)
        stage_name = "{0}{1}-{2}-{3}".format(stage_prefix, s.name, s.version, s.dag_hash())

        stage = Stage(
            fetcher,
            mirror_paths=mirror_paths,
@@ -1196,7 +1200,7 @@ def _make_fetcher(self):
        # one element (the root package). In case there are resources
        # associated with the package, append their fetcher to the
        # composite.
        root_fetcher = fs.for_package_version(self)
        root_fetcher = fs.for_package_version(self, self.version)
        fetcher = fs.FetchStrategyComposite()  # Composite fetcher
        fetcher.append(root_fetcher)  # Root fetcher is always present
        resources = self._get_needed_resources()
@@ -1307,7 +1311,7 @@ def provides(self, vpkg_name):
        True if this package provides a virtual package with the specified name
        """
        return any(
            any(self.spec.intersects(c) for c in constraints)
            any(self.spec.satisfies(c) for c in constraints)
            for s, constraints in self.provided.items()
            if s.name == vpkg_name
        )
@@ -1613,7 +1617,7 @@ def content_hash(self, content=None):
        # TODO: resources
        if self.spec.versions.concrete:
            try:
                source_id = fs.for_package_version(self).source_id()
                source_id = fs.for_package_version(self, self.version).source_id()
            except (fs.ExtrapolationError, fs.InvalidArgsError):
                # ExtrapolationError happens if the package has no fetchers defined.
                # InvalidArgsError happens when there are version directives with args,
@@ -1776,7 +1780,7 @@ def _get_needed_resources(self):
            # conflict with the spec, so we need to invoke
            # when_spec.satisfies(self.spec) vs.
            # self.spec.satisfies(when_spec)
            if when_spec.intersects(self.spec):
            if when_spec.satisfies(self.spec, strict=False):
                resources.extend(resource_list)
        # Sorts the resources by the length of the string representing their
        # destination. Since any nested resource must contain another
@@ -2397,7 +2401,7 @@ def rpath(self):

        # on Windows, libraries of runtime interest are typically
        # stored in the bin directory
        if sys.platform == "win32":
        if is_windows:
            rpaths = [self.prefix.bin]
            rpaths.extend(d.prefix.bin for d in deps if os.path.isdir(d.prefix.bin))
        else:


@@ -73,7 +73,7 @@ def __call__(self, spec):
        # integer is the index of the first spec in order that satisfies
        # spec, or it's a number larger than any position in the order.
        match_index = next(
            (i for i, s in enumerate(spec_order) if spec.intersects(s)), len(spec_order)
            (i for i, s in enumerate(spec_order) if spec.satisfies(s)), len(spec_order)
        )
        if match_index < len(spec_order) and spec_order[match_index] == spec:
            # If this is called with multiple specs that all satisfy the same
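The next(...) call above turns "index of the first preference that matches" into a sort key, with len(spec_order) serving as the sorts-after-everything default for non-matches. The same idiom in isolation (prefix matching stands in for the spec check):

    spec_order = ["gcc@12", "gcc", "clang"]  # hypothetical preference order

    def preference_key(candidate):
        return next(
            (i for i, s in enumerate(spec_order) if candidate.startswith(s)),
            len(spec_order),
        )

    assert preference_key("gcc@12.2") == 0   # best preference
    assert preference_key("clang@15") == 2
    assert preference_key("intel") == 3      # no match: sorts last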
@@ -185,7 +185,7 @@ def _package(maybe_abstract_spec):
            ),
            extra_attributes=entry.get("extra_attributes", {}),
        )
        if external_spec.intersects(spec):
        if external_spec.satisfies(spec):
            external_specs.append(external_spec)

    # Defensively copy returned specs


@@ -37,7 +37,7 @@


def slingshot_network():
    return os.path.exists("/opt/cray/pe") and os.path.exists("/lib64/libcxi.so")
    return os.path.exists("/lib64/libcxi.so")


def _target_name_from_craype_target_name(name):


@@ -10,7 +10,7 @@ def get_projection(projections, spec):
    """
    all_projection = None
    for spec_like, projection in projections.items():
        if spec.satisfies(spec_like):
        if spec.satisfies(spec_like, strict=True):
            return projection
        elif spec_like == "all":
            all_projection = projection
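get_projection returns the first specific match and only falls back to the "all" entry after the loop finishes. A worked example of that lookup, with plain prefix matching standing in for spec.satisfies:

    projections = {
        "python": "{name}-{version}/py",
        "all": "{name}-{version}",
    }

    def get_projection(projections, name):
        all_projection = None
        for key, projection in projections.items():
            if name.startswith(key):  # stand-in for spec.satisfies(key)
                return projection
            elif key == "all":
                all_projection = projection
        return all_projection

    assert get_projection(projections, "python") == "{name}-{version}/py"
    assert get_projection(projections, "zlib") == "{name}-{version}"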
|
@@ -72,7 +72,7 @@ def providers_for(self, virtual_spec):
|
||||
# Add all the providers that satisfy the vpkg spec.
|
||||
if virtual_spec.name in self.providers:
|
||||
for p_spec, spec_set in self.providers[virtual_spec.name].items():
|
||||
if p_spec.intersects(virtual_spec, deps=False):
|
||||
if p_spec.satisfies(virtual_spec, deps=False):
|
||||
result.update(spec_set)
|
||||
|
||||
# Return providers in order. Defensively copy.
|
||||
@@ -186,7 +186,7 @@ def update(self, spec):
|
||||
provider_spec = provider_spec_readonly.copy()
|
||||
provider_spec.compiler_flags = spec.compiler_flags.copy()
|
||||
|
||||
if spec.intersects(provider_spec, deps=False):
|
||||
if spec.satisfies(provider_spec, deps=False):
|
||||
provided_name = provided_spec.name
|
||||
|
||||
provider_map = self.providers.setdefault(provided_name, {})
|
||||
|
@@ -675,7 +675,7 @@ def relocate_text_bin(binaries, prefixes):
|
||||
Raises:
|
||||
spack.relocate_text.BinaryTextReplaceError: when the new path is longer than the old path
|
||||
"""
|
||||
return BinaryFilePrefixReplacer.from_strings_or_bytes(prefixes).apply(binaries)
|
||||
BinaryFilePrefixReplacer.from_strings_or_bytes(prefixes).apply(binaries)
|
||||
|
||||
|
||||
def is_relocatable(spec):
|
||||
|
@@ -73,28 +73,24 @@ def is_noop(self) -> bool:
         """Returns true when the prefix to prefix map
         is mapping everything to the same location (identity)
         or there are no prefixes to replace."""
-        return not self.prefix_to_prefix
+        return not bool(self.prefix_to_prefix)

     def apply(self, filenames: list):
-        """Returns a list of files that were modified"""
-        changed_files = []
         if self.is_noop:
-            return []
+            return
         for filename in filenames:
-            if self.apply_to_filename(filename):
-                changed_files.append(filename)
-        return changed_files
+            self.apply_to_filename(filename)

     def apply_to_filename(self, filename):
         if self.is_noop:
-            return False
+            return
         with open(filename, "rb+") as f:
-            return self.apply_to_file(f)
+            self.apply_to_file(f)

     def apply_to_file(self, f):
         if self.is_noop:
-            return False
-        return self._apply_to_file(f)
+            return
+        self._apply_to_file(f)


 class TextFilePrefixReplacer(PrefixReplacer):
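The hunks in this file revert `PrefixReplacer` from reporting which files changed back to fire-and-forget application. The value of the reporting variant is that callers, binary relocation for instance, can act only on files that were actually rewritten. A standalone sketch of the reporting shape; the class and names here are illustrative, not Spack's actual code:

```python
# Sketch of "apply and report modified files", independent of Spack.
from typing import Dict, List


class Replacer:
    def __init__(self, prefix_to_prefix: Dict[bytes, bytes]):
        # Drop identity mappings up front so is_noop is a cheap check.
        self.prefix_to_prefix = {k: v for k, v in prefix_to_prefix.items() if k != v}

    @property
    def is_noop(self) -> bool:
        return not self.prefix_to_prefix

    def apply(self, filenames: List[str]) -> List[str]:
        """Return the subset of filenames that were actually rewritten."""
        if self.is_noop:
            return []
        return [fn for fn in filenames if self.apply_to_filename(fn)]

    def apply_to_filename(self, filename: str) -> bool:
        with open(filename, "rb+") as f:
            data = f.read()
            new = data
            for old_prefix, new_prefix in self.prefix_to_prefix.items():
                new = new.replace(old_prefix, new_prefix)
            if new == data:
                return False  # nothing matched; report "unmodified"
            f.seek(0)
            f.write(new)
            f.truncate()
            return True
```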
@@ -126,11 +122,10 @@ def _apply_to_file(self, f):
         data = f.read()
         new_data = re.sub(self.regex, replacement, data)
         if id(data) == id(new_data):
-            return False
+            return
         f.seek(0)
         f.write(new_data)
         f.truncate()
-        return True


 class BinaryFilePrefixReplacer(PrefixReplacer):
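The `id(data) == id(new_data)` test leans on a CPython implementation detail: `re.sub` returns the original object itself when no substitution occurred, so an O(1) identity check stands in for a full byte comparison to detect "nothing matched". A quick illustration:

```python
import re

data = b"no prefixes here"
untouched = re.sub(b"/old/prefix", b"/new", data)

# CPython hands back the very same object when nothing was replaced.
print(untouched is data)  # True: no match, same object

changed = re.sub(b"here", b"now", data)
print(changed is data)    # False: a substitution produced a new object
```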
@@ -199,9 +194,6 @@ def _apply_to_file(self, f):

         Arguments:
             f: file opened in rb+ mode
-
-        Returns:
-            bool: True if file was modified
         """
         assert f.tell() == 0

@@ -209,8 +201,6 @@ def _apply_to_file(self, f):
         # but it's nasty to deal with matches across boundaries, so let's stick to
         # something simple.

-        modified = True
-
         for match in self.regex.finditer(f.read()):
             # The matching prefix (old) and its replacement (new)
             old = match.group(1)
@@ -253,9 +243,6 @@ def _apply_to_file(self, f):

             f.seek(match.start())
             f.write(replacement)
-            modified = True
-
-        return modified


 class BinaryStringReplacementError(spack.error.SpackError):
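Binary prefix replacement has the constraint stated in `relocate_text_bin`'s docstring: the new path may not be longer than the old one, since strings embedded in a compiled binary cannot grow without corrupting offsets. The standard trick, sketched below under that assumption, is to pad the shorter replacement with trailing null bytes so the rewritten C string keeps the original byte width:

```python
# Sketch of fixed-width C-string prefix replacement in binary data.
def replace_prefix_in_cstring(cstring: bytes, old: bytes, new: bytes) -> bytes:
    assert cstring.startswith(old)
    assert len(new) <= len(old), "cannot grow a string inside a binary"
    out = new + cstring[len(old):]
    # Pad to the original width; C-string readers stop at the first \0.
    return out + b"\0" * (len(cstring) - len(out))


s = b"/long/old/prefix/lib/libz.so"
print(replace_prefix_in_cstring(s, b"/long/old/prefix", b"/opt/new"))
# b'/opt/new/lib/libz.so\x00\x00\x00\x00\x00\x00\x00\x00'
```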
@@ -1368,7 +1368,7 @@ def create(configuration):


 #: Singleton repo path instance
-path: Union[RepoPath, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(_path)
+path = llnl.util.lang.Singleton(_path)

 # Add the finder to sys.meta_path
 REPOS_FINDER = ReposFinder()
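`llnl.util.lang.Singleton` wraps a factory so the repo path is only materialized on first use; the annotation being dropped here merely documented that `path` can be either the lazy proxy or the built `RepoPath`. A minimal sketch of such a proxy, using only the standard library and not Spack's actual implementation:

```python
# Minimal lazy-singleton proxy in the spirit of llnl.util.lang.Singleton.
class Singleton:
    def __init__(self, factory):
        self.factory = factory
        self._instance = None

    @property
    def instance(self):
        if self._instance is None:
            self._instance = self.factory()  # built on first access
        return self._instance

    def __getattr__(self, name):
        # Only called for names not found on the proxy itself, so
        # attribute access transparently forwards to the real instance.
        return getattr(self.instance, name)


path = Singleton(lambda: object())  # the factory runs on first real use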
@@ -119,7 +119,7 @@ def rewire_node(spec, explicit):
     spack.store.db.add(spec, spack.store.layout, explicit=explicit)

     # run post install hooks
-    spack.hooks.post_install(spec, explicit)
+    spack.hooks.post_install(spec)


 class RewireError(spack.error.SpackError):
@@ -15,8 +15,7 @@
     "cdash": {
         "type": "object",
         "additionalProperties": False,
-        # "required": ["build-group", "url", "project", "site"],
-        "required": ["build-group"],
+        "required": ["build-group", "url", "project", "site"],
         "patternProperties": {
             r"build-group": {"type": "string"},
             r"url": {"type": "string"},
@@ -1,212 +0,0 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

"""Schema for ci.yaml configuration file.

.. literalinclude:: ../spack/schema/ci.py
   :lines: 13-
"""

from llnl.util.lang import union_dicts

import spack.schema.gitlab_ci

# Schema for script fields
# List of lists and/or strings
# This is similar to what is allowed in
# the gitlab schema
script_schema = {
    "type": "array",
    "items": {"anyOf": [{"type": "string"}, {"type": "array", "items": {"type": "string"}}]},
}

# Schema for CI image
image_schema = {
    "oneOf": [
        {"type": "string"},
        {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "entrypoint": {"type": "array", "items": {"type": "string"}},
            },
        },
    ]
}

# Additional attributes are allowed
# and will be forwarded directly to the
# CI target YAML for each job.
attributes_schema = {
    "type": "object",
    "properties": {
        "image": image_schema,
        "tags": {"type": "array", "items": {"type": "string"}},
        "variables": {
            "type": "object",
            "patternProperties": {r"[\w\d\-_\.]+": {"type": "string"}},
        },
        "before_script": script_schema,
        "script": script_schema,
        "after_script": script_schema,
    },
}

submapping_schema = {
    "type": "object",
    "additionalProperties": False,
    "required": ["submapping"],
    "properties": {
        "match_behavior": {"type": "string", "enum": ["first", "merge"], "default": "first"},
        "submapping": {
            "type": "array",
            "items": {
                "type": "object",
                "additionalProperties": False,
                "required": ["match"],
                "properties": {
                    "match": {"type": "array", "items": {"type": "string"}},
                    "build-job": attributes_schema,
                    "build-job-remove": attributes_schema,
                },
            },
        },
    },
}

named_attributes_schema = {
    "oneOf": [
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {"noop-job": attributes_schema, "noop-job-remove": attributes_schema},
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {"build-job": attributes_schema, "build-job-remove": attributes_schema},
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {
                "reindex-job": attributes_schema,
                "reindex-job-remove": attributes_schema,
            },
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {
                "signing-job": attributes_schema,
                "signing-job-remove": attributes_schema,
            },
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {
                "cleanup-job": attributes_schema,
                "cleanup-job-remove": attributes_schema,
            },
        },
        {
            "type": "object",
            "additionalProperties": False,
            "properties": {"any-job": attributes_schema, "any-job-remove": attributes_schema},
        },
    ]
}

pipeline_gen_schema = {
    "type": "array",
    "items": {"oneOf": [submapping_schema, named_attributes_schema]},
}

core_shared_properties = union_dicts(
    {
        "pipeline-gen": pipeline_gen_schema,
        "bootstrap": {
            "type": "array",
            "items": {
                "anyOf": [
                    {"type": "string"},
                    {
                        "type": "object",
                        "additionalProperties": False,
                        "required": ["name"],
                        "properties": {
                            "name": {"type": "string"},
                            "compiler-agnostic": {"type": "boolean", "default": False},
                        },
                    },
                ]
            },
        },
        "rebuild-index": {"type": "boolean"},
        "broken-specs-url": {"type": "string"},
        "broken-tests-packages": {"type": "array", "items": {"type": "string"}},
        "target": {"type": "string", "enum": ["gitlab"], "default": "gitlab"},
    }
)

ci_properties = {
    "anyOf": [
        {
            "type": "object",
            "additionalProperties": False,
            # "required": ["mappings"],
            "properties": union_dicts(
                core_shared_properties, {"enable-artifacts-buildcache": {"type": "boolean"}}
            ),
        },
        {
            "type": "object",
            "additionalProperties": False,
            # "required": ["mappings"],
            "properties": union_dicts(
                core_shared_properties, {"temporary-storage-url-prefix": {"type": "string"}}
            ),
        },
    ]
}

#: Properties for inclusion in other schemas
properties = {
    "ci": {
        "oneOf": [
            ci_properties,
            # Allow legacy format under `ci` for `config update ci`
            spack.schema.gitlab_ci.gitlab_ci_properties,
        ]
    }
}

#: Full schema with metadata
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Spack CI configuration file schema",
    "type": "object",
    "additionalProperties": False,
    "properties": properties,
}


def update(data):
    import llnl.util.tty as tty

    import spack.ci
    import spack.environment as ev

    # Warn if deprecated section is still in the environment
    ci_env = ev.active_environment()
    if ci_env:
        env_config = ev.config_dict(ci_env.yaml)
        if "gitlab-ci" in env_config:
            tty.die("Error: `gitlab-ci` section detected with `ci`, these are not compatible")

    # Detect if the ci section is using the new pipeline-gen
    # If it is, assume it has already been converted
    return spack.ci.translate_deprecated_config(data)
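The deleted module above is a plain JSON Schema expressed as Python dicts; Spack validates each configuration section against such a schema with the `jsonschema` package it vendors. A hedged sketch of validating a small `ci`-like section against a draft-07 schema of the same shape (the snippet assumes `jsonschema` is installed, e.g. `pip install jsonschema`):

```python
# Validating a config snippet against a draft-07 schema dict.
import jsonschema

schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "additionalProperties": False,
    "properties": {
        "ci": {
            "type": "object",
            "properties": {"rebuild-index": {"type": "boolean"}},
        }
    },
}

jsonschema.validate({"ci": {"rebuild-index": True}}, schema)  # passes silently
try:
    jsonschema.validate({"ci": {"rebuild-index": "yes"}}, schema)
except jsonschema.ValidationError as e:
    print(e.message)  # "'yes' is not of type 'boolean'"
```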
@@ -14,9 +14,7 @@
         "type": "object",
         "additionalProperties": False,
         "properties": {
-            "reuse": {
-                "oneOf": [{"type": "boolean"}, {"type": "string", "enum": ["dependencies"]}]
-            },
+            "reuse": {"type": "boolean"},
             "enable_node_namespace": {"type": "boolean"},
             "targets": {
                 "type": "object",
@@ -61,7 +61,6 @@
             "build_stage": {
                 "oneOf": [{"type": "string"}, {"type": "array", "items": {"type": "string"}}]
             },
-            "stage_name": {"type": "string"},
             "test_stage": {"type": "string"},
             "extensions": {"type": "array", "items": {"type": "string"}},
             "template_dirs": {"type": "array", "items": {"type": "string"}},
@@ -63,8 +63,6 @@
                 },
                 # Add labels to the image
                 "labels": {"type": "object"},
-                # Use a custom template to render the recipe
-                "template": {"type": "string", "default": None},
                 # Add a custom extra section at the bottom of a stage
                 "extra_instructions": {
                     "type": "object",
@@ -84,16 +82,6 @@
                 },
             },
             "docker": {"type": "object", "additionalProperties": False, "default": {}},
-            "depfile": {"type": "boolean", "default": False},
         },
-        "deprecatedProperties": {
-            "properties": ["extra_instructions"],
-            "message": (
-                "container:extra_instructions has been deprecated and will be removed "
-                "in Spack v0.21. Set container:template appropriately to use custom Jinja2 "
-                "templates instead."
-            ),
-            "error": False,
-        },
     }

@@ -10,7 +10,6 @@
 """
 from llnl.util.lang import union_dicts

-import spack.schema.gitlab_ci  # DEPRECATED
 import spack.schema.merged
 import spack.schema.packages
 import spack.schema.projections
@@ -53,8 +52,6 @@
             "default": {},
             "additionalProperties": False,
             "properties": union_dicts(
-                # Include deprecated "gitlab-ci" section
-                spack.schema.gitlab_ci.properties,
                 # merged configuration scope schemas
                 spack.schema.merged.properties,
                 # extra environment schema properties
@@ -133,15 +130,6 @@ def update(data):
     Returns:
         True if data was changed, False otherwise
     """
-
-    import spack.ci
-
-    if "gitlab-ci" in data:
-        data["ci"] = data.pop("gitlab-ci")
-
-    if "ci" in data:
-        return spack.ci.translate_deprecated_config(data["ci"])
-
     # There are not currently any deprecated attributes in this section
     # that have not been removed
     return False
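The removed body implements a one-way key migration: a legacy `gitlab-ci` section is renamed to `ci`, and the possibly legacy-format contents are then translated in place, with the return value reporting whether anything changed. A standalone sketch of that rename-then-translate pattern; `translate` below is a hypothetical stand-in for `spack.ci.translate_deprecated_config`:

```python
# Sketch of rename-then-translate config migration; translate() and its
# "mappings" key are illustrative stand-ins, not Spack's actual logic.
def translate(section: dict) -> bool:
    changed = False
    if "mappings" in section:  # hypothetical legacy key
        section["pipeline-gen"] = section.pop("mappings")
        changed = True
    return changed


def update(data: dict) -> bool:
    """Migrate legacy keys in place; return True if data was modified."""
    if "gitlab-ci" in data:
        data["ci"] = data.pop("gitlab-ci")
    if "ci" in data:
        return translate(data["ci"])
    return False
```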
@@ -12,11 +12,11 @@

 import spack.schema.bootstrap
 import spack.schema.cdash
-import spack.schema.ci
 import spack.schema.compilers
 import spack.schema.concretizer
 import spack.schema.config
 import spack.schema.container
+import spack.schema.gitlab_ci
 import spack.schema.mirrors
 import spack.schema.modules
 import spack.schema.packages
@@ -31,7 +31,7 @@
     spack.schema.concretizer.properties,
     spack.schema.config.properties,
     spack.schema.container.properties,
-    spack.schema.ci.properties,
+    spack.schema.gitlab_ci.properties,
     spack.schema.mirrors.properties,
     spack.schema.modules.properties,
     spack.schema.packages.properties,
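`spack.schema.merged.properties` is simply the dict-union of every section schema's `properties`, so a single merged schema can validate a whole configuration scope; swapping `ci` for `gitlab_ci` in that union is all it takes to change which section name is legal. A tiny sketch of this `union_dicts`-style composition:

```python
# Composing one merged `properties` mapping from per-section schemas,
# in the spirit of llnl.util.lang.union_dicts.
def union_dicts(*dicts):
    result = {}
    for d in dicts:
        result.update(d)  # later schemas win on key collisions
    return result


config_properties = {"config": {"type": "object"}}
ci_properties = {"ci": {"type": "object"}}

merged_properties = union_dicts(config_properties, ci_properties)
print(sorted(merged_properties))  # ['ci', 'config']
```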
@@ -32,16 +32,11 @@
             {
                 "type": "array",
                 "items": {
-                    "oneOf": [
-                        {
-                            "type": "object",
-                            "properties": {
-                                "one_of": {"type": "array"},
-                                "any_of": {"type": "array"},
-                            },
-                        },
-                        {"type": "string"},
-                    ]
+                    "type": "object",
+                    "properties": {
+                        "one_of": {"type": "array"},
+                        "any_of": {"type": "array"},
+                    },
                 },
             },
             # Shorthand for a single requirement group with
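The requirements hunk restores the older, stricter list form in which every item must be a `one_of`/`any_of` group; the removed `oneOf` wrapper is what additionally permits a bare string spec as shorthand. For illustration, both shapes as `packages.yaml`-style data, with a hypothetical package name:

```python
# Two `require:` shapes for packages.yaml, expressed as Python data.
# Accepted by the newer schema (bare string shorthand allowed):
newer = {
    "packages": {
        "mpich": {"require": ["+slurm", {"one_of": ["@4.1", "@4.0"]}]}
    }
}

# Accepted by the older schema restored above (groups only):
older = {
    "packages": {
        "mpich": {"require": [{"any_of": ["+slurm"]}, {"one_of": ["@4.1", "@4.0"]}]}
    }
}
```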
Some files were not shown because too many files have changed in this diff.