Compare commits

13 Commits: features/i ... develop-20

SHA1:
755a4054b2
040d747a86
9440894173
4e42e3c2ec
662bf113e2
7e11fd62e2
a6c22f2690
4894668ece
199133fca4
ea3a3b51a0
23bd3e6104
c72477e67a
2d2a4d1908
@@ -63,7 +63,6 @@ on these ideas for each distinct build system that Spack supports:

   build_systems/cudapackage
   build_systems/custompackage
   build_systems/inteloneapipackage
   build_systems/intelpackage
   build_systems/rocmpackage
   build_systems/sourceforgepackage

@@ -33,9 +33,6 @@ For more information on a specific package, do::

   spack info --all <package-name>

Intel no longer releases new versions of Parallel Studio, which can be
used in Spack via the :ref:`intelpackage`. All of its components can
now be found in oneAPI.

Examples
========

@@ -50,34 +47,8 @@ Install the oneAPI compilers::

   spack install intel-oneapi-compilers

Add the compilers to your ``compilers.yaml`` so spack can use them::

   spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/bin

Verify that the compilers are available::

   spack compiler list

Note that 2024 and later releases do not include ``icc``. Before 2024,
the package layout was different::

   spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin/intel64
   spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin

The ``intel-oneapi-compilers`` package includes 2 families of
compilers:

* ``intel``: ``icc``, ``icpc``, ``ifort``. Intel's *classic*
  compilers. 2024 and later releases contain ``ifort``, but not
  ``icc`` and ``icpc``.
* ``oneapi``: ``icx``, ``icpx``, ``ifx``. Intel's new generation of
  compilers based on LLVM.

To build the ``patchelf`` Spack package with ``icc``, do::

   spack install patchelf%intel

To build with with ``icx``, do ::
To build the ``patchelf`` Spack package with ``icx``, do::

   spack install patchelf%oneapi

@@ -92,15 +63,6 @@ Install the oneAPI compilers::

   spack install intel-oneapi-compilers

Add the compilers to your ``compilers.yaml`` so Spack can use them::

   spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/bin
   spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/bin

Verify that the compilers are available::

   spack compiler list

Clone `spack-configs <https://github.com/spack/spack-configs>`_ repo and activate Intel oneAPI CPU environment::

   git clone https://github.com/spack/spack-configs
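The activation step itself is outside this hunk. As a hedged sketch only (the subdirectory inside ``spack-configs`` is an assumption; check the repository for the actual Intel oneAPI CPU environment path), activation would look like::

   # Hypothetical environment directory inside the cloned repo
   spack env activate -d spack-configs/INTEL/CPU
   spack concretize -f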

@@ -149,7 +111,7 @@ Compilers
---------

To use the compilers, add some information about the installation to
``compilers.yaml``. For most users, it is sufficient to do::
``packages.yaml``. For most users, it is sufficient to do::

   spack compiler add /opt/intel/oneapi/compiler/latest/bin

@@ -157,7 +119,7 @@ Adapt the paths above if you did not install the tools in the default
location. After adding the compilers, using them is the same
as if you had installed the ``intel-oneapi-compilers`` package.
Another option is to manually add the configuration to
``compilers.yaml`` as described in :ref:`Compiler configuration
``packages.yaml`` as described in :ref:`Compiler configuration
<compiler-config>`.

Before 2024, the directory structure was different::

@@ -200,15 +162,5 @@ You can also use Spack-installed libraries. For example::

Will update your environment CPATH, LIBRARY_PATH, and other
environment variables for building an application with oneMKL.
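As an illustration only (the package name ``intel-oneapi-mkl`` and the use of ``spack load`` are assumptions for this sketch, not necessarily the exact command omitted by the hunk above), loading a Spack-installed oneMKL into the current shell would look like::

   spack install intel-oneapi-mkl
   spack load intel-oneapi-mkl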

More information
================

This section describes basic use of oneAPI, especially if it has
changed compared to Parallel Studio. See :ref:`intelpackage` for more
information on :ref:`intel-virtual-packages`,
:ref:`intel-unrelated-packages`,
:ref:`intel-integrating-external-libraries`, and
:ref:`using-mkl-tips`.

.. _`Intel installers`: https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top.html

File diff suppressed because it is too large.

@@ -12,8 +12,7 @@ The ``ROCmPackage`` is not a build system but a helper package. Like ``CudaPacka
it provides standard variants, dependencies, and conflicts to facilitate building
packages using GPUs though for AMD in this case.

You can find the source for this package (and suggestions for setting up your
``compilers.yaml`` and ``packages.yaml`` files) at
You can find the source for this package (and suggestions for setting up your ``packages.yaml`` file) at
`<https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/rocm.py>`__.

^^^^^^^^

@@ -11,7 +11,7 @@ Configuration Files

Spack has many configuration files. Here is a quick list of them, in
case you want to skip directly to specific docs:

* :ref:`compilers.yaml <compiler-config>`
* :ref:`packages.yaml <compiler-config>`
* :ref:`concretizer.yaml <concretizer-options>`
* :ref:`config.yaml <config-yaml>`
* :ref:`include.yaml <include-yaml>`

@@ -95,7 +95,7 @@ are six configuration scopes. From lowest to highest:
precedence over all other scopes.

Each configuration directory may contain several configuration files,
such as ``config.yaml``, ``compilers.yaml``, or ``mirrors.yaml``. When
such as ``config.yaml``, ``packages.yaml``, or ``mirrors.yaml``. When
configurations conflict, settings from higher-precedence scopes override
lower-precedence settings.
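As a hedged illustration of this precedence (the file paths and the value shown are hypothetical), the same key set in two scopes resolves to the higher-precedence one, and ``spack config blame config`` shows which file each effective setting comes from:

.. code-block:: yaml

   # $SPACK_ROOT/etc/spack/defaults/config.yaml (defaults scope, lower precedence)
   config:
     build_jobs: 4

   # ~/.spack/config.yaml (user scope, higher precedence; this value wins)
   config:
     build_jobs: 16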

@@ -686,7 +686,7 @@ the environment.

   spack:
     include:
     - environment/relative/path/to/config.yaml
     - https://github.com/path/to/raw/config/compilers.yaml
     - https://github.com/path/to/raw/config/packages.yaml
     - /absolute/path/to/packages.yaml
     - path: /path/to/$os/$target/environment
       optional: true

@@ -254,12 +254,11 @@ directory.
Compiler configuration
----------------------

Spack has the ability to build packages with multiple compilers and
compiler versions. Compilers can be made available to Spack by
specifying them manually in ``compilers.yaml`` or ``packages.yaml``,
or automatically by running ``spack compiler find``, but for
convenience Spack will automatically detect compilers the first time
it needs them.
Spack has the ability to build packages with multiple compilers and compiler versions.
Compilers can be made available to Spack by specifying them manually in ``packages.yaml``,
or automatically by running ``spack compiler find``.
For convenience, Spack will automatically detect compilers the first time it needs them,
if none is available.

.. _cmd-spack-compilers:

@@ -274,16 +273,11 @@ compilers`` or ``spack compiler list``:

   $ spack compilers
   ==> Available compilers
   -- gcc ---------------------------------------------------------
   gcc@4.9.0 gcc@4.8.0 gcc@4.7.0 gcc@4.6.2 gcc@4.4.7
   gcc@4.8.2 gcc@4.7.1 gcc@4.6.3 gcc@4.6.1 gcc@4.1.2
   -- intel -------------------------------------------------------
   intel@15.0.0 intel@14.0.0 intel@13.0.0 intel@12.1.0 intel@10.0
   intel@14.0.3 intel@13.1.1 intel@12.1.5 intel@12.0.4 intel@9.1
   intel@14.0.2 intel@13.1.0 intel@12.1.3 intel@11.1
   intel@14.0.1 intel@13.0.1 intel@12.1.2 intel@10.1
   -- clang -------------------------------------------------------
   clang@3.4 clang@3.3 clang@3.2 clang@3.1
   -- gcc ubuntu20.04-x86_64 ---------------------------------------
   gcc@9.4.0 gcc@8.4.0 gcc@10.5.0

   -- llvm ubuntu20.04-x86_64 --------------------------------------
   llvm@12.0.0 llvm@11.0.0 llvm@10.0.0

Any of these compilers can be used to build Spack packages. More on
how this is done is in :ref:`sec-specs`.
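For example, any of the detected compilers can be requested directly in a spec (a minimal sketch; the package and version here are arbitrary):

.. code-block:: console

   $ spack install zlib-ng %gcc@10.5.0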

@@ -302,16 +296,22 @@ An alias for ``spack compiler find``.
``spack compiler find``
^^^^^^^^^^^^^^^^^^^^^^^

Lists the compilers currently available to Spack. If you do not see
a compiler in this list, but you want to use it with Spack, you can
simply run ``spack compiler find`` with the path to where the
compiler is installed. For example:
If you do not see a compiler in the list shown by:

.. code-block:: console

   $ spack compiler find /usr/local/tools/ic-13.0.079
   ==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
       intel@13.0.079
   $ spack compiler list

but you want to use it with Spack, you can simply run ``spack compiler find`` with the
path to where the compiler is installed. For example:

.. code-block:: console

   $ spack compiler find /opt/intel/oneapi/compiler/2025.1/bin/
   ==> Added 1 new compiler to /home/user/.spack/packages.yaml
       intel-oneapi-compilers@2025.1.0
   ==> Compilers are defined in the following files:
       /home/user/.spack/packages.yaml

Or you can run ``spack compiler find`` with no arguments to force
auto-detection. This is useful if you do not know where compilers are

@@ -322,7 +322,7 @@ installed, but you know that new compilers have been added to your

   $ module load gcc/4.9.0
   $ spack compiler find
   ==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
   ==> Added 1 new compiler to /home/user/.spack/packages.yaml
       gcc@4.9.0

This loads the environment module for gcc-4.9.0 to add it to

@@ -331,7 +331,7 @@ This loads the environment module for gcc-4.9.0 to add it to
.. note::

   By default, spack does not fill in the ``modules:`` field in the
   ``compilers.yaml`` file. If you are using a compiler from a
   ``packages.yaml`` file. If you are using a compiler from a
   module, then you should add this field manually.
   See the section on :ref:`compilers-requiring-modules`.
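A hedged sketch of what adding the ``modules:`` field by hand might look like (the prefix, paths, and module name are assumptions):

.. code-block:: yaml

   packages:
     gcc:
       externals:
       - spec: gcc@10.5.0 languages='c,c++,fortran'
         prefix: /opt/compilers
         modules: [gcc/10.5.0]
         extra_attributes:
           compilers:
             c: /opt/compilers/bin/gcc-10
             cxx: /opt/compilers/bin/g++-10
             fortran: /opt/compilers/bin/gfortran-10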

@@ -341,91 +341,82 @@ This loads the environment module for gcc-4.9.0 to add it to
``spack compiler info``
^^^^^^^^^^^^^^^^^^^^^^^

If you want to see specifics on a particular compiler, you can run
``spack compiler info`` on it:
If you want to see additional information on some specific compilers, you can run ``spack compiler info`` on it:

.. code-block:: console

   $ spack compiler info intel@15
   intel@15.0.0:
     paths:
       cc = /usr/local/bin/icc-15.0.090
       cxx = /usr/local/bin/icpc-15.0.090
       f77 = /usr/local/bin/ifort-15.0.090
       fc = /usr/local/bin/ifort-15.0.090
     modules = []
     operating_system = centos6
     ...
   $ spack compiler info gcc
   gcc@=8.4.0 languages='c,c++,fortran' arch=linux-ubuntu20.04-x86_64:
     prefix: /usr
     compilers:
       c: /usr/bin/gcc-8
       cxx: /usr/bin/g++-8
       fortran: /usr/bin/gfortran-8

This shows which C, C++, and Fortran compilers were detected by Spack.
Notice also that we didn't have to be too specific about the
version. We just said ``intel@15``, and information about the only
matching Intel compiler was displayed.
   gcc@=9.4.0 languages='c,c++,fortran' arch=linux-ubuntu20.04-x86_64:
     prefix: /usr
     compilers:
       c: /usr/bin/gcc
       cxx: /usr/bin/g++
       fortran: /usr/bin/gfortran

   gcc@=10.5.0 languages='c,c++,fortran' arch=linux-ubuntu20.04-x86_64:
     prefix: /usr
     compilers:
       c: /usr/bin/gcc-10
       cxx: /usr/bin/g++-10
       fortran: /usr/bin/gfortran-10

This shows the details of the compilers that were detected by Spack.
Notice also that we didn't have to be too specific about the version. We just said ``gcc``, and we got information
about all the matching compilers.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Manual compiler configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If auto-detection fails, you can manually configure a compiler by
editing your ``~/.spack/<platform>/compilers.yaml`` file. You can do this by running
``spack config edit compilers``, which will open the file in
If auto-detection fails, you can manually configure a compiler by editing your ``~/.spack/packages.yaml`` file.
You can do this by running ``spack config edit packages``, which will open the file in
:ref:`your favorite editor <controlling-the-editor>`.

Each compiler configuration in the file looks like this:
Each compiler has an "external" entry in the file with some ``extra_attributes``:

.. code-block:: yaml

   compilers:
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /usr/local/bin/icc-15.0.024-beta
         cxx: /usr/local/bin/icpc-15.0.024-beta
         f77: /usr/local/bin/ifort-15.0.024-beta
         fc: /usr/local/bin/ifort-15.0.024-beta
       spec: intel@15.0.0
   packages:
     gcc:
       externals:
       - spec: gcc@10.5.0 languages='c,c++,fortran'
         prefix: /usr
         extra_attributes:
           compilers:
             c: /usr/bin/gcc-10
             cxx: /usr/bin/g++-10
             fortran: /usr/bin/gfortran-10

For compilers that do not support Fortran (like ``clang``), put
``None`` for ``f77`` and ``fc``:
The compiler executables are listed under ``extra_attributes:compilers``, and are keyed by language.
Once you save the file, the configured compilers will show up in the list displayed by ``spack compilers``.

.. code-block:: yaml

   compilers:
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /usr/bin/clang
         cxx: /usr/bin/clang++
         f77: None
         fc: None
       spec: clang@3.3svn

Once you save the file, the configured compilers will show up in the
list displayed by ``spack compilers``.

You can also add compiler flags to manually configured compilers. These
flags should be specified in the ``flags`` section of the compiler
specification. The valid flags are ``cflags``, ``cxxflags``, ``fflags``,
You can also add compiler flags to manually configured compilers. These flags should be specified in the
``flags`` section of the compiler specification. The valid flags are ``cflags``, ``cxxflags``, ``fflags``,
``cppflags``, ``ldflags``, and ``ldlibs``. For example:

.. code-block:: yaml

   compilers:
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /usr/bin/gcc
         cxx: /usr/bin/g++
         f77: /usr/bin/gfortran
         fc: /usr/bin/gfortran
       flags:
         cflags: -O3 -fPIC
         cxxflags: -O3 -fPIC
         cppflags: -O3 -fPIC
       spec: gcc@4.7.2
   packages:
     gcc:
       externals:
       - spec: gcc@10.5.0 languages='c,c++,fortran'
         prefix: /usr
         extra_attributes:
           compilers:
             c: /usr/bin/gcc-10
             cxx: /usr/bin/g++-10
             fortran: /usr/bin/gfortran-10
           flags:
             cflags: -O3 -fPIC
             cxxflags: -O3 -fPIC
             cppflags: -O3 -fPIC

These flags will be treated by spack as if they were entered from
the command line each time this compiler is used. The compiler wrappers

@@ -440,95 +431,44 @@ These variables should be specified in the ``environment`` section of the compil
specification. The operations available to modify the environment are ``set``, ``unset``,
``prepend_path``, ``append_path``, and ``remove_path``. For example:

.. code-block:: yaml

   compilers:
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /opt/intel/oneapi/compiler/latest/linux/bin/icx
         cxx: /opt/intel/oneapi/compiler/latest/linux/bin/icpx
         f77: /opt/intel/oneapi/compiler/latest/linux/bin/ifx
         fc: /opt/intel/oneapi/compiler/latest/linux/bin/ifx
       spec: oneapi@latest
       environment:
         set:
           MKL_ROOT: "/path/to/mkl/root"
         unset: # A list of environment variables to unset
         - CC
         prepend_path: # Similar for append|remove_path
           LD_LIBRARY_PATH: /ld/paths/added/by/setvars/sh

.. note::

   Spack is in the process of moving compilers from a separate
   attribute to be handled like all other packages. As part of this
   process, the ``compilers.yaml`` section will eventually be replaced
   by configuration in the ``packages.yaml`` section. This new
   configuration is now available, although it is not yet the default
   behavior.

Compilers can also be configured as external packages in the
``packages.yaml`` config file. Any external package for a compiler
(e.g. ``gcc`` or ``llvm``) will be treated as a configured compiler
assuming the paths to the compiler executables are determinable from
the prefix.

If the paths to the compiler executable are not determinable from the
prefix, you can add them to the ``extra_attributes`` field. Similarly,
all other fields from the compilers config can be added to the
``extra_attributes`` field for an external representing a compiler.

Note that the format for the ``paths`` field in the
``extra_attributes`` section is different than in the ``compilers``
config. For compilers configured as external packages, the section is
named ``compilers`` and the dictionary maps language names (``c``,
``cxx``, ``fortran``) to paths, rather than using the names ``cc``,
``fc``, and ``f77``.

.. code-block:: yaml

   packages:
     gcc:
       external:
       - spec: gcc@12.2.0 arch=linux-rhel8-skylake
         prefix: /usr
         extra_attributes:
           environment:
             set:
               GCC_ROOT: /usr
       external:
       - spec: llvm+clang@15.0.0 arch=linux-rhel8-skylake
         prefix: /usr
     intel-oneapi-compilers:
       externals:
       - spec: intel-oneapi-compilers@2025.1.0
         prefix: /opt/intel/oneapi
         extra_attributes:
           compilers:
             c: /usr/bin/clang-with-suffix
             cxx: /usr/bin/clang++-with-extra-info
             fortran: /usr/bin/gfortran
           extra_rpaths:
           - /usr/lib/llvm/
             c: /opt/intel/oneapi/compiler/2025.1/bin/icx
             cxx: /opt/intel/oneapi/compiler/2025.1/bin/icpx
             fortran: /opt/intel/oneapi/compiler/2025.1/bin/ifx
           environment:
             set:
               MKL_ROOT: "/path/to/mkl/root"
             unset: # A list of environment variables to unset
             - CC
             prepend_path: # Similar for append|remove_path
               LD_LIBRARY_PATH: /ld/paths/added/by/setvars/sh

^^^^^^^^^^^^^^^^^^^^^^^
Build Your Own Compiler
^^^^^^^^^^^^^^^^^^^^^^^

If you are particular about which compiler/version you use, you might
wish to have Spack build it for you. For example:
If you are particular about which compiler/version you use, you might wish to have Spack build it for you.
For example:

.. code-block:: console

   $ spack install gcc@4.9.3
   $ spack install gcc@14+binutils

Once that has finished, you will need to add it to your
``compilers.yaml`` file. You can then set Spack to use it by default
by adding the following to your ``packages.yaml`` file:
Once the compiler is installed, you can start using it without additional configuration:

.. code-block:: yaml
.. code-block:: console

   packages:
     all:
       compiler: [gcc@4.9.3]
   $ spack install hdf5~mpi %gcc@14

The same holds true for compilers that are made available from buildcaches, when reusing them is allowed.
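For instance, a compiler can be pulled from a configured binary mirror instead of being built locally (a hedged sketch; it assumes the mirror actually provides a matching ``gcc`` binary):

.. code-block:: console

   $ spack install --use-buildcache only gcc@14
   $ spack install hdf5~mpi %gcc@14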

.. _compilers-requiring-modules:

@@ -536,30 +476,26 @@ by adding the following to your ``packages.yaml`` file:
Compilers Requiring Modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Many installed compilers will work regardless of the environment they
are called with. However, some installed compilers require
``$LD_LIBRARY_PATH`` or other environment variables to be set in order
to run; this is typical for Intel and other proprietary compilers.
Many installed compilers will work regardless of the environment they are called with.
However, some installed compilers require environment variables to be set in order to run;
this is typical for Intel and other proprietary compilers.

In such a case, you should tell Spack which module(s) to load in order
to run the chosen compiler (If the compiler does not come with a
module file, you might consider making one by hand). Spack will load
this module into the environment ONLY when the compiler is run, and
NOT in general for a package's ``install()`` method. See, for
example, this ``compilers.yaml`` file:
On typical HPC clusters, these environment modifications are usually delegated to some "module" system.
In such a case, you should tell Spack which module(s) to load in order to run the chosen compiler:

.. code-block:: yaml

   compilers:
   - compiler:
       modules: [other/comp/gcc-5.3-sp3]
       operating_system: SuSE11
       paths:
         cc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gcc
         cxx: /usr/local/other/SLES11.3/gcc/5.3.0/bin/g++
         f77: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
         fc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
       spec: gcc@5.3.0
   packages:
     gcc:
       externals:
       - spec: gcc@10.5.0 languages='c,c++,fortran'
         prefix: /opt/compilers
         extra_attributes:
           compilers:
             c: /opt/compilers/bin/gcc-10
             cxx: /opt/compilers/bin/g++-10
             fortran: /opt/compilers/bin/gfortran-10
         modules: [gcc/10.5.0]

Some compilers require special environment settings to be loaded not just
to run, but also to execute the code they build, breaking packages that

@@ -580,7 +516,7 @@ Licensed Compilers
^^^^^^^^^^^^^^^^^^

Some proprietary compilers require licensing to use. If you need to
use a licensed compiler (eg, PGI), the process is similar to a mix of
use a licensed compiler, the process is similar to a mix of
build your own, plus modules:

#. Create a Spack package (if it doesn't exist already) to install

@@ -590,24 +526,21 @@ build your own, plus modules:
   using Spack to load the module it just created, and running simple
   builds (eg: ``cc helloWorld.c && ./a.out``)

#. Add the newly-installed compiler to ``compilers.yaml`` as shown
   above.
#. Add the newly-installed compiler to ``packages.yaml`` as shown above.

.. _mixed-toolchains:

^^^^^^^^^^^^^^^^
Mixed Toolchains
^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^
Fortran compilers on macOS
^^^^^^^^^^^^^^^^^^^^^^^^^^

Modern compilers typically come with related compilers for C, C++ and
Fortran bundled together. When possible, results are best if the same
compiler is used for all languages.

In some cases, this is not possible. For example, starting with macOS El
Capitan (10.11), many packages no longer build with GCC, but XCode
provides no Fortran compilers. The user is therefore forced to use a
mixed toolchain: XCode-provided Clang for C/C++ and GNU ``gfortran`` for
Fortran.
In some cases, this is not possible. For example, XCode on macOS provides no Fortran compilers.
The user is therefore forced to use a mixed toolchain: XCode-provided Clang for C/C++ and e.g.
GNU ``gfortran`` for Fortran.

#. You need to make sure that Xcode is installed. Run the following command:

@@ -660,45 +593,25 @@ Fortran.

   Note: the flag is ``-license``, not ``--license``.

#. Run ``spack compiler find`` to locate Clang.

#. There are different ways to get ``gfortran`` on macOS. For example, you can
   install GCC with Spack (``spack install gcc``), with Homebrew (``brew install
   gcc``), or from a `DMG installer
   <https://github.com/fxcoudert/gfortran-for-macOS/releases>`_.

#. The only thing left to do is to edit ``~/.spack/darwin/compilers.yaml`` to provide
   the path to ``gfortran``:
#. Run ``spack compiler find`` to locate both Apple-Clang and GCC.

   .. code-block:: yaml

      compilers:
      - compiler:
          # ...
          paths:
            cc: /usr/bin/clang
            cxx: /usr/bin/clang++
            f77: /path/to/bin/gfortran
            fc: /path/to/bin/gfortran
          spec: apple-clang@11.0.0

   If you used Spack to install GCC, you can get the installation prefix by
   ``spack location -i gcc`` (this will only work if you have a single version
   of GCC installed). Whereas for Homebrew, GCC is installed in
   ``/usr/local/Cellar/gcc/x.y.z``. With the DMG installer, the correct path
   will be ``/usr/local/gfortran``.
Since languages in Spack are modeled as virtual packages, ``apple-clang`` will be used to provide
C and C++, while GCC will be used for Fortran.

^^^^^^^^^^^^^^^^^^^^^
Compiler Verification
^^^^^^^^^^^^^^^^^^^^^

You can verify that your compilers are configured properly by installing a
simple package. For example:
You can verify that your compilers are configured properly by installing a simple package. For example:

.. code-block:: console

   $ spack install zlib%gcc@5.3.0
   $ spack install zlib-ng%gcc@5.3.0

.. _vendor-specific-compiler-configuration:

@@ -707,9 +620,7 @@ simple package. For example:
Vendor-Specific Compiler Configuration
--------------------------------------

With Spack, things usually "just work" with GCC. Not so for other
compilers. This section provides details on how to get specific
compilers working.
This section provides details on how to get vendor-specific compilers working.

^^^^^^^^^^^^^^^
Intel Compilers

@@ -731,8 +642,8 @@ compilers:
you have installed from the ``PATH`` environment variable.

If you want use a version of ``gcc`` or ``g++`` other than the default
version on your system, you need to use either the ``-gcc-name``
or ``-gxx-name`` compiler option to specify the path to the version of
version on your system, you need to use either the ``--gcc-install-dir``
or ``--gcc-toolchain`` compiler option to specify the path to the version of
``gcc`` or ``g++`` that you want to use."

-- `Intel Reference Guide <https://software.intel.com/en-us/node/522750>`_
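As a hedged illustration (the GCC path below is hypothetical, and the flag placement follows the ``extra_attributes:flags`` format shown earlier), the option can be attached to the oneAPI compiler entry so it is passed on every compile line:

.. code-block:: yaml

   packages:
     intel-oneapi-compilers:
       externals:
       - spec: intel-oneapi-compilers@2025.1.0
         prefix: /opt/intel/oneapi
         extra_attributes:
           flags:
             cflags: --gcc-toolchain=/path/to/spack-installed/gcc
             cxxflags: --gcc-toolchain=/path/to/spack-installed/gcc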

@@ -740,76 +651,12 @@ compilers:
Intel compilers may therefore be configured in one of two ways with
Spack: using modules, or using compiler flags.

""""""""""""""""""""""""""
Configuration with Modules
""""""""""""""""""""""""""

One can control which GCC is seen by the Intel compiler with modules.
A module must be loaded both for the Intel Compiler (so it will run)
and GCC (so the compiler can find the intended GCC). The following
configuration in ``compilers.yaml`` illustrates this technique:

.. code-block:: yaml

   compilers:
   - compiler:
       modules: [gcc-4.9.3, intel-15.0.24]
       operating_system: centos7
       paths:
         cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
         cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
         f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
         fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
       spec: intel@15.0.24.4.9.3

.. note::

   The version number on the Intel compiler is a combination of
   the "native" Intel version number and the GNU compiler it is
   targeting.

""""""""""""""""""""""""""
Command Line Configuration
""""""""""""""""""""""""""

One can also control which GCC is seen by the Intel compiler by adding
flags to the ``icc`` command:

#. Identify the location of the compiler you just installed:

   .. code-block:: console

      $ spack location --install-dir gcc
      ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw...

#. Set up ``compilers.yaml``, for example:

   .. code-block:: yaml

      compilers:
      - compiler:
          modules: [intel-15.0.24]
          operating_system: centos7
          paths:
            cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
            cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
            f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
            fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
          flags:
            cflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
            cxxflags: -gxx-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/g++
            fflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
          spec: intel@15.0.24.4.9.3

^^^
NAG
^^^

The Numerical Algorithms Group provides a licensed Fortran compiler. Like Clang,
this requires you to set up a :ref:`mixed-toolchains`. It is recommended to use
GCC for your C/C++ compilers.
The Numerical Algorithms Group provides a licensed Fortran compiler.
It is recommended to use GCC for your C/C++ compilers.

The NAG Fortran compilers are a bit more strict than other compilers, and many
packages will fail to install with error messages like:

@@ -826,44 +673,40 @@ the command line:

   $ spack install openmpi fflags="-mismatch"

Or it can be set permanently in your ``compilers.yaml``:
Or it can be set permanently in your ``packages.yaml``:

.. code-block:: yaml

   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /soft/spack/opt/spack/linux-x86_64/gcc-5.3.0/gcc-6.1.0-q2zosj3igepi3pjnqt74bwazmptr5gpj/bin/gcc
         cxx: /soft/spack/opt/spack/linux-x86_64/gcc-5.3.0/gcc-6.1.0-q2zosj3igepi3pjnqt74bwazmptr5gpj/bin/g++
         f77: /soft/spack/opt/spack/linux-x86_64/gcc-4.4.7/nag-6.1-jt3h5hwt5myezgqguhfsan52zcskqene/bin/nagfor
         fc: /soft/spack/opt/spack/linux-x86_64/gcc-4.4.7/nag-6.1-jt3h5hwt5myezgqguhfsan52zcskqene/bin/nagfor
       flags:
         fflags: -mismatch
       spec: nag@6.1

   packages:
     nag:
       externals:
       - spec: nag@6.1
         prefix: /opt/nag/bin
         extra_attributes:
           compilers:
             fortran: /opt/nag/bin/nagfor
           flags:
             fflags: -mismatch

---------------
System Packages
---------------

Once compilers are configured, one needs to determine which
pre-installed system packages, if any, to use in builds. This is
configured in the file ``~/.spack/packages.yaml``. For example, to use
an OpenMPI installed in /opt/local, one would use:
Once compilers are configured, one needs to determine which pre-installed system packages,
if any, to use in builds. These are also configured in the ``~/.spack/packages.yaml`` file.
For example, to use an OpenMPI installed in /opt/local, one would use:

.. code-block:: yaml

   packages:
     openmpi:
       externals:
       - spec: openmpi@1.10.1
         prefix: /opt/local
       buildable: False
   packages:
     openmpi:
       buildable: False
       externals:
       - spec: openmpi@1.10.1
         prefix: /opt/local

In general, Spack is easier to use and more reliable if it builds all of
its own dependencies. However, there are several packages for which one
commonly needs to use system versions:
In general, *Spack is easier to use and more reliable if it builds all of its own dependencies*.
However, there are several packages for which one commonly needs to use system versions:

^^^
MPI

@@ -876,8 +719,7 @@ you are unlikely to get a working MPI from Spack. Instead, use an
appropriate pre-installed MPI.

If you choose a pre-installed MPI, you should consider using the
pre-installed compiler used to build that MPI; see above on
``compilers.yaml``.
pre-installed compiler used to build that MPI.
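One way to keep the external MPI and its compiler together is a ``require`` entry in ``packages.yaml`` (a hedged sketch; the compiler version and prefix are assumptions):

.. code-block:: yaml

   packages:
     openmpi:
       buildable: False
       externals:
       - spec: openmpi@1.10.1
         prefix: /opt/local
       require: "%gcc@10.5.0"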

^^^^^^^
OpenSSL

@@ -1441,9 +1283,9 @@ To configure Spack, first run the following command inside the Spack console:

   spack compiler find

This creates a ``.staging`` directory in our Spack prefix, along with a ``windows`` subdirectory
containing a ``compilers.yaml`` file. On a fresh Windows install with the above packages
containing a ``packages.yaml`` file. On a fresh Windows install with the above packages
installed, this command should only detect Microsoft Visual Studio and the Intel Fortran
compiler will be integrated within the first version of MSVC present in the ``compilers.yaml``
compiler will be integrated within the first version of MSVC present in the ``packages.yaml``
output.

Spack provides a default ``config.yaml`` file for Windows that it will use unless overridden.

@@ -311,4 +311,4 @@ def ld_flags(self):

#: Tuple of Intel math libraries, exported to packages
INTEL_MATH_LIBRARIES = ("intel-mkl", "intel-oneapi-mkl", "intel-parallel-studio")
INTEL_MATH_LIBRARIES = ("intel-oneapi-mkl",)

@@ -4,7 +4,6 @@

import json
import os
import re
import shutil
import sys
from typing import Dict

@@ -26,12 +25,10 @@
import spack.hash_types as ht
import spack.mirrors.mirror
import spack.package_base
import spack.paths
import spack.repo
import spack.spec
import spack.stage
import spack.util.executable
import spack.util.git
import spack.util.gpg as gpg_util
import spack.util.timer as timer
import spack.util.url as url_util

@@ -45,7 +42,6 @@
SPACK_COMMAND = "spack"
INSTALL_FAIL_CODE = 1
FAILED_CREATE_BUILDCACHE_CODE = 100
BUILTIN = re.compile(r"var\/spack\/repos\/builtin\/packages\/([^\/]+)\/package\.py")


def deindent(desc):

@@ -783,18 +779,15 @@ def ci_verify_versions(args):
    then parses the git diff between the two to determine which packages
    have been modified verifies the new checksums inside of them.
    """
    with fs.working_dir(spack.paths.prefix):
        # We use HEAD^1 explicitly on the merge commit created by
        # GitHub Actions. However HEAD~1 is a safer default for the helper function.
        files = spack.util.git.get_modified_files(from_ref=args.from_ref, to_ref=args.to_ref)

        # Get a list of package names from the modified files.
        pkgs = [(m.group(1), p) for p in files for m in [BUILTIN.search(p)] if m]
    # Get a list of all packages that have been changed or added
    # between from_ref and to_ref
    pkgs = spack.repo.get_all_package_diffs("AC", args.from_ref, args.to_ref)

    failed_version = False
    for pkg_name, path in pkgs:
    for pkg_name in pkgs:
        spec = spack.spec.Spec(pkg_name)
        pkg = spack.repo.PATH.get_pkg_class(spec.name)(spec)
        path = spack.repo.PATH.package_path(pkg_name)

        # Skip checking manual download packages and trust the maintainers
        if pkg.manual_download:

@@ -818,7 +811,7 @@ def ci_verify_versions(args):
            # TODO: enforce every version have a commit or a sha256 defined if not
            # an infinite version (there are a lot of package's where this doesn't work yet.)

            with fs.working_dir(spack.paths.prefix):
            with fs.working_dir(os.path.dirname(path)):
                added_checksums = spack_ci.get_added_versions(
                    checksums_version_dict, path, from_ref=args.from_ref, to_ref=args.to_ref
                )

@@ -572,7 +572,7 @@ def edit(self, spec, prefix):
class IntelPackageTemplate(PackageTemplate):
    """Provides appropriate overrides for licensed Intel software"""

    base_class_name = "IntelPackage"
    base_class_name = "IntelOneApiPackage"

    body_def = """\
        # FIXME: Override `setup_environment` if necessary."""

@@ -65,7 +65,6 @@
import spack.util.executable
import spack.util.path
import spack.util.timer as timer
from spack.traverse import CoverNodesVisitor, traverse_breadth_first_with_visitor
from spack.util.environment import EnvironmentModifications, dump_environment
from spack.util.executable import which

@@ -119,11 +118,6 @@ class ExecuteResult(enum.Enum):
    FAILED = enum.auto()
    # Task is missing build spec and will be requeued
    MISSING_BUILD_SPEC = enum.auto()
    # Task is queued to install from binary but no binary found
    MISSING_BINARY = enum.auto()


requeue_results = [ExecuteResult.MISSING_BUILD_SPEC, ExecuteResult.MISSING_BINARY]


class InstallAction(enum.Enum):

@@ -135,46 +129,22 @@ class InstallAction(enum.Enum):
    OVERWRITE = enum.auto()


class InstallerProgress:
    """Installation progress tracker"""

    def __init__(self, packages: List["spack.package_base.PackageBase"]):
        self.counter = SpecsCount(dt.BUILD | dt.LINK | dt.RUN)
        self.pkg_count: int = self.counter.total([pkg.spec for pkg in packages])
        self.pkg_ids: Set[str] = set()
class InstallStatus:
    def __init__(self, pkg_count: int):
        # Counters used for showing status information
        self.pkg_num: int = 0
        self.add_progress: bool = spack.config.get("config:install_status", True)
        self.pkg_count: int = pkg_count
        self.pkg_ids: Set[str] = set()

    def set_installed(self, pkg: "spack.package_base.PackageBase", message: str) -> None:
        """
        Flag package as installed and output the installation status if
        enabled by config:install_status.

        Args:
            pkg: installed package
            message: message to be output
        """
    def next_pkg(self, pkg: "spack.package_base.PackageBase"):
        pkg_id = package_id(pkg.spec)

        if pkg_id not in self.pkg_ids:
            self.pkg_num += 1
            self.pkg_ids.add(pkg_id)
        visited = max(len(self.pkg_ids), self.counter.total([pkg.spec]), self.pkg_num + 1)
        self.pkg_num = visited

        if tty.msg_enabled():
            post = self.get_progress() if self.add_progress else ""
            print(
                colorize("@*g{[+]} ") + spack.util.path.debug_padded_filter(message) + f" {post}"
            )

        self.set_term_title("Installed")

    def set_term_title(self, text: str):
        """Update the terminal title bar.

        Args:
            text: message to output in the terminal title bar
        """
        if not self.add_progress:
        if not spack.config.get("config:install_status", True):
            return

        if not sys.stdout.isatty():

@@ -185,11 +155,7 @@ def set_term_title(self, text: str):
        sys.stdout.flush()

    def get_progress(self) -> str:
        """Current installation progress

        Returns: string showing the current installation progress
        """
        return f"[{self.pkg_num}/{self.pkg_count} completed]"
        return f"[{self.pkg_num}/{self.pkg_count}]"


class TermStatusLine:

@@ -258,9 +224,7 @@ def _check_last_phase(pkg: "spack.package_base.PackageBase") -> None:
    pkg.last_phase = None  # type: ignore[attr-defined]


def _handle_external_and_upstream(
    pkg: "spack.package_base.PackageBase", explicit: bool, progress: InstallerProgress
) -> bool:
def _handle_external_and_upstream(pkg: "spack.package_base.PackageBase", explicit: bool) -> bool:
    """
    Determine if the package is external or upstream and register it in the
    database if it is external package.

@@ -268,8 +232,6 @@ def _handle_external_and_upstream(
    Args:
        pkg: the package whose installation is under consideration
        explicit: the package was explicitly requested by the user
        progress: installation progress tracker

    Return:
        ``True`` if the package is not to be installed locally, otherwise ``False``
    """

@@ -277,7 +239,7 @@ def _handle_external_and_upstream(
    # consists in module file generation and registration in the DB.
    if pkg.spec.external:
        _process_external_package(pkg, explicit)
        progress.set_installed(pkg, f"{pkg.prefix} (external {package_id(pkg.spec)})")
        _print_installed_pkg(f"{pkg.prefix} (external {package_id(pkg.spec)})")
        return True

    if pkg.spec.installed_upstream:

@@ -285,7 +247,7 @@ def _handle_external_and_upstream(
            f"{package_id(pkg.spec)} is installed in an upstream Spack instance at "
            f"{pkg.spec.prefix}"
        )
        progress.set_installed(pkg, pkg.prefix)
        _print_installed_pkg(pkg.prefix)

        # This will result in skipping all post-install hooks. In the case
        # of modules this is considered correct because we want to retrieve

@@ -361,6 +323,17 @@ def _log_prefix(pkg_name) -> str:
    return f"{pid}{pkg_name}:"


def _print_installed_pkg(message: str) -> None:
    """
    Output a message with a package icon.

    Args:
        message (str): message to be output
    """
    if tty.msg_enabled():
        print(colorize("@*g{[+]} ") + spack.util.path.debug_padded_filter(message))


def print_install_test_log(pkg: "spack.package_base.PackageBase") -> None:
    """Output install test log file path but only if have test failures.

@@ -381,17 +354,13 @@ def _print_timer(pre: str, pkg_id: str, timer: timer.BaseTimer) -> None:


def _install_from_cache(
    pkg: "spack.package_base.PackageBase",
    progress: InstallerProgress,
    explicit: bool,
    unsigned: Optional[bool] = False,
    pkg: "spack.package_base.PackageBase", explicit: bool, unsigned: Optional[bool] = False
) -> bool:
    """
    Install the package from binary cache

    Args:
        pkg: package to install from the binary cache
        progress: installation status tracker
        explicit: ``True`` if installing the package was explicitly
            requested by the user, otherwise, ``False``
        unsigned: if ``True`` or ``False`` override the mirror signature verification defaults

@@ -411,7 +380,7 @@ def _install_from_cache(

    _write_timer_json(pkg, t, True)
    _print_timer(pre=_log_prefix(pkg.name), pkg_id=pkg_id, timer=t)
    progress.set_installed(pkg, pkg.spec.prefix)
    _print_installed_pkg(pkg.spec.prefix)
    spack.hooks.post_install(pkg.spec, explicit)
    return True

@@ -622,7 +591,7 @@ def get_dependent_ids(spec: "spack.spec.Spec") -> List[str]:
    return [package_id(d) for d in spec.dependents()]


def install_msg(name: str, pid: int) -> str:
def install_msg(name: str, pid: int, install_status: InstallStatus) -> str:
    """
    Colorize the name/id of the package being installed

@@ -633,7 +602,12 @@ def install_msg(name: str, pid: int) -> str:
    Return: Colorized installing message
    """
    pre = f"{pid}: " if tty.show_pid() else ""
    return pre + colorize("@*{Installing} @*g{%s}" % (name))
    post = (
        " @*{%s}" % install_status.get_progress()
        if install_status and spack.config.get("config:install_status", True)
        else ""
    )
    return pre + colorize("@*{Installing} @*g{%s}%s" % (name, post))


def archive_install_logs(pkg: "spack.package_base.PackageBase", phase_log_dir: str) -> None:

@@ -742,18 +716,6 @@ def package_id(spec: "spack.spec.Spec") -> str:
    return f"{spec.name}-{spec.version}-{spec.dag_hash()}"


class SpecsCount:
    def __init__(self, depflag: int):
        self.depflag = depflag

    def total(self, specs: List["spack.spec.Spec"]):
        visitor = CoverNodesVisitor(
            spack.spec.DagCountVisitor(self.depflag), key=lambda s: package_id(s)
        )
        traverse_breadth_first_with_visitor(specs, visitor)
        return visitor.visitor.number


class BuildRequest:
    """Class for representing an installation request."""

@@ -844,7 +806,16 @@ def get_depflags(self, pkg: "spack.package_base.PackageBase") -> int:
        depflag = dt.LINK | dt.RUN
        include_build_deps = self.install_args.get("include_build_deps")

        if include_build_deps:
        if self.pkg_id == package_id(pkg.spec):
            cache_only = self.install_args.get("package_cache_only")
        else:
            cache_only = self.install_args.get("dependencies_cache_only")

        # Include build dependencies if pkg is going to be built from sources, or
        # if build deps are explicitly requested.
        if include_build_deps or not (
            cache_only or pkg.spec.installed and pkg.spec.dag_hash() not in self.overwrite
        ):
            depflag |= dt.BUILD
        if self.run_tests(pkg):
            depflag |= dt.TEST

@@ -901,6 +872,7 @@ def __init__(
        pkg: "spack.package_base.PackageBase",
        request: BuildRequest,
        *,
        compiler: bool = False,
        start: float = 0.0,
        attempts: int = 0,
        status: BuildStatus = BuildStatus.QUEUED,

@@ -995,14 +967,11 @@ def __init__(
        self.attempts = attempts
        self._update()

    def execute(self, progress: InstallerProgress) -> ExecuteResult:
    def execute(self, install_status: InstallStatus) -> ExecuteResult:
        """Execute the work of this task.

        Args:
            progress: installation progress tracker

        Returns: execution result
        """
        The ``install_status`` is an ``InstallStatus`` object used to format progress reporting for
        this task in the context of the full ``BuildRequest``."""
        raise NotImplementedError

    def __eq__(self, other):

@@ -1167,26 +1136,33 @@ def priority(self):
class BuildTask(Task):
    """Class for representing a build task for a package."""

    def execute(self, progress: InstallerProgress) -> ExecuteResult:
    def execute(self, install_status):
        """
        Perform the installation of the requested spec and/or dependency
        represented by the build task.

        Args:
            progress: installation progress tracker

        Returns: execution result
        """
        install_args = self.request.install_args
        tests = install_args.get("tests", False)
        tests = install_args.get("tests")
        unsigned = install_args.get("unsigned")

        pkg, pkg_id = self.pkg, self.pkg_id

        tty.msg(install_msg(pkg_id, self.pid))
        tty.msg(install_msg(pkg_id, self.pid, install_status))
        self.start = self.start or time.time()
        self.status = BuildStatus.INSTALLING

        pkg.run_tests = tests is True or (tests and pkg.name in tests)
        # Use the binary cache if requested
        if self.use_cache:
            if _install_from_cache(pkg, self.explicit, unsigned):
                return ExecuteResult.SUCCESS
            elif self.cache_only:
                raise spack.error.InstallError(
                    "No binary found when cache-only was specified", pkg=pkg
                )
            else:
                tty.msg(f"No binary for {pkg_id} found: installing from source")

        pkg.run_tests = tests is True or tests and pkg.name in tests

        # hook that allows tests to inspect the Package before installation
        # see unit_test_check() docs.

@@ -1209,8 +1185,6 @@ def execute(self, progress: InstallerProgress) -> ExecuteResult:
            # Note: PARENT of the build process adds the new package to
            # the database, so that we don't need to re-read from file.
            spack.store.STORE.db.add(pkg.spec, explicit=self.explicit)

            progress.set_installed(self.pkg, self.pkg.prefix)
        except spack.error.StopPhase as e:
            # A StopPhase exception means that do_install was asked to
            # stop early from clients, and is not an error at this point
@@ -1220,77 +1194,10 @@ def execute(self, progress: InstallerProgress) -> ExecuteResult:
|
||||
return ExecuteResult.SUCCESS
|
||||
|
||||
|
||||
class InstallTask(Task):
|
||||
"""Class for representing a build task for a package."""
|
||||
|
||||
def execute(self, progress: InstallerProgress) -> ExecuteResult:
|
||||
"""
|
||||
Perform the installation of the requested spec and/or dependency
|
||||
represented by the build task.
|
||||
|
||||
Args:
|
||||
progress: installation progress tracker
|
||||
|
||||
Returns: execution result
|
||||
"""
|
||||
# no-op and requeue to build if not allowed to use cache
|
||||
if not self.use_cache:
|
||||
return ExecuteResult.MISSING_BINARY
|
||||
|
||||
install_args = self.request.install_args
|
||||
unsigned = install_args.get("unsigned")
|
||||
|
||||
pkg, pkg_id = self.pkg, self.pkg_id
|
||||
|
||||
tty.msg(install_msg(pkg_id, self.pid))
|
||||
self.start = self.start or time.time()
|
||||
self.status = BuildStatus.INSTALLING
|
||||
|
||||
try:
|
||||
if _install_from_cache(pkg, progress, self.explicit, unsigned):
|
||||
return ExecuteResult.SUCCESS
|
||||
elif self.cache_only:
|
||||
raise spack.error.InstallError(
|
||||
"No binary found when cache-only was specified", pkg=pkg
|
||||
)
|
||||
else:
|
||||
tty.msg(f"No binary for {pkg_id} found: installing from source")
|
||||
return ExecuteResult.MISSING_BINARY
|
||||
except binary_distribution.NoChecksumException as exc:
|
||||
if self.cache_only:
|
||||
raise
|
||||
|
||||
tty.error(
|
||||
f"Failed to install {self.pkg.name} from binary cache due "
|
||||
f"to {str(exc)}: Requeueing to install from source."
|
||||
)
|
||||
return ExecuteResult.MISSING_BINARY
|
||||
|
||||
def build_task(self, installed):
|
||||
build_task = BuildTask(
|
||||
pkg=self.pkg,
|
||||
request=self.request,
|
||||
start=0,
|
||||
attempts=self.attempts,
|
||||
status=BuildStatus.QUEUED,
|
||||
installed=installed,
|
||||
)
|
||||
|
||||
# Fixup dependents in case it was changed by `add_dependent`
|
||||
# This would be the case of a `build_spec` for a spliced spec
|
||||
build_task.dependents = self.dependents
|
||||
|
||||
# Same for dependencies
|
||||
build_task.dependencies = self.dependencies
|
||||
build_task.uninstalled_deps = self.uninstalled_deps - installed
|
||||
|
||||
return build_task
|
||||
|
||||
|
||||
class RewireTask(Task):
|
||||
"""Class for representing a rewire task for a package."""
|
||||
|
||||
def execute(self, progress: InstallerProgress) -> ExecuteResult:
|
||||
def execute(self, install_status):
|
||||
"""Execute rewire task
|
||||
|
||||
Rewire tasks are executed by either rewiring self.package.spec.build_spec that is already
|
||||
@@ -1299,30 +1206,24 @@ def execute(self, progress: InstallerProgress) -> ExecuteResult:
|
||||
If not available installed or as binary, return ExecuteResult.MISSING_BUILD_SPEC.
|
||||
This will prompt the Installer to requeue the task with a dependency on the BuildTask
|
||||
to install self.pkg.spec.build_spec
|
||||
|
||||
Args:
|
||||
progress: installation progress tracker
|
||||
|
||||
Returns: execution result
|
||||
"""
|
||||
oldstatus = self.status
|
||||
self.status = BuildStatus.INSTALLING
|
||||
tty.msg(install_msg(self.pkg_id, self.pid))
|
||||
tty.msg(install_msg(self.pkg_id, self.pid, install_status))
self.start = self.start or time.time()
if not self.pkg.spec.build_spec.installed:
try:
install_args = self.request.install_args
unsigned = install_args.get("unsigned")
_process_binary_cache_tarball(self.pkg, explicit=self.explicit, unsigned=unsigned)
progress.set_installed(self.pkg, self.pkg.prefix)
_print_installed_pkg(self.pkg.prefix)
return ExecuteResult.SUCCESS
except BaseException as e:
tty.error(f"Failed to rewire {self.pkg.spec} from binary. {e}")
self.status = oldstatus
return ExecuteResult.MISSING_BUILD_SPEC

spack.rewiring.rewire_node(self.pkg.spec, self.explicit)
progress.set_installed(self.pkg, self.pkg.prefix)
_print_installed_pkg(self.pkg.prefix)
return ExecuteResult.SUCCESS

@@ -1422,9 +1323,6 @@ def __init__(
# Priority queue of tasks
self.build_pq: List[Tuple[Tuple[int, int], Task]] = []

# Installation status tracker
self.progress: InstallerProgress = InstallerProgress(packages)

# Mapping of unique package ids to task
self.build_tasks: Dict[str, Task] = {}

@@ -1479,9 +1377,8 @@ def _add_init_task(
request: the associated install request
all_deps: dictionary of all dependencies and associated dependents
"""
cls = RewireTask if pkg.spec.spliced else InstallTask
task: Task = cls(pkg, request=request, status=BuildStatus.QUEUED, installed=self.installed)

cls = RewireTask if pkg.spec.spliced else BuildTask
task = cls(pkg, request=request, status=BuildStatus.QUEUED, installed=self.installed)
for dep_id in task.dependencies:
all_deps[dep_id].add(package_id(pkg.spec))

@@ -1774,7 +1671,7 @@ def _requeue_with_build_spec_tasks(self, task):
"""Requeue the task and its missing build spec dependencies"""
# Full install of the build_spec is necessary because it didn't already exist somewhere
spec = task.pkg.spec
for dep in spec.build_spec.traverse(deptype=task.request.get_depflags(task.pkg)):
for dep in spec.build_spec.traverse():
dep_pkg = dep.package

dep_id = package_id(dep)
@@ -1797,48 +1694,6 @@ def _requeue_with_build_spec_tasks(self, task):
spec_task.add_dependency(build_pkg_id)
self._push_task(spec_task)

def _requeue_as_build_task(self, task):
# TODO: handle the compile bootstrapping stuff?
spec = task.pkg.spec
build_dep_ids = []
for builddep in spec.dependencies(deptype=dt.BUILD):
# track which package ids are the direct build deps
build_dep_ids.append(package_id(builddep))
for dep in builddep.traverse(deptype=task.request.get_depflags(task.pkg)):
dep_pkg = dep.package
dep_id = package_id(dep)

# Add a new task if we need one
if dep_id not in self.build_tasks and dep_id not in self.installed:
self._add_init_task(dep_pkg, task.request, self.all_dependencies)
# Add edges for an existing task if it exists
elif dep_id in self.build_tasks:
for parent in dep.dependents():
parent_id = package_id(parent)
self.build_tasks[dep_id].add_dependent(parent_id)

# Clear any persistent failure markings _unless_ they
# are associated with another process in this parallel build
spack.store.STORE.failure_tracker.clear(dep, force=False)

# Remove InstallTask
self._remove_task(task.pkg_id)

# New task to build this spec from source
build_task = task.build_task(self.installed)
build_task_id = package_id(spec)

# Attach dependency relationships between spec and build deps
for build_dep_id in build_dep_ids:
if build_dep_id not in self.installed:
build_dep_task = self.build_tasks[build_dep_id]
build_dep_task.add_dependent(build_task_id)

build_task.add_dependency(build_dep_id)

# Add new Task -- this removes the old task as well
self._push_task(build_task)
def _add_tasks(self, request: BuildRequest, all_deps):
"""Add tasks to the priority queue for the given build request.

@@ -1892,55 +1747,19 @@ def _add_tasks(self, request: BuildRequest, all_deps):
fail_fast = bool(request.install_args.get("fail_fast"))
self.fail_fast = self.fail_fast or fail_fast

def _install_task(self, task: Task) -> ExecuteResult:
def _install_task(self, task: Task, install_status: InstallStatus) -> None:
"""
Perform the installation of the requested spec and/or dependency
represented by the task.

Args:
task: the installation task for a package
"""
rc = task.execute(self.progress)
install_status: the installation status for the package"""
rc = task.execute(install_status)
if rc == ExecuteResult.MISSING_BUILD_SPEC:
self._requeue_with_build_spec_tasks(task)
elif rc == ExecuteResult.MISSING_BINARY:
self._requeue_as_build_task(task)
else: # if rc == ExecuteResult.SUCCESS or rc == ExecuteResult.FAILED
self._update_installed(task)
return rc

def _overwrite_install_task(self, task: Task):
"""
Try to run the install task overwriting the package prefix.
If this fails, try to recover the original install prefix. If that fails
too, mark the spec as uninstalled.
"""
try:
with fs.replace_directory_transaction(task.pkg.prefix):
rc = self._install_task(task)
if rc in requeue_results:
raise Requeue # raise to trigger transactional replacement of directory

except Requeue:
pass # This task is requeueing, not failing
except fs.CouldNotRestoreDirectoryBackup as e:
spack.store.STORE.db.remove(task.pkg.spec)
if isinstance(e.inner_exception, Requeue):
message_fn = tty.warn
else:
message_fn = tty.error

message_fn(
f"Recovery of install dir of {task.pkg.name} failed due to "
f"{e.outer_exception.__class__.__name__}: {str(e.outer_exception)}. "
"The spec is now uninstalled."
)

# Unwrap the actual installation exception
if isinstance(e.inner_exception, Requeue):
tty.warn("Task will be requeued to build from source")
else:
raise e.inner_exception

def _next_is_pri0(self) -> bool:
"""
@@ -2044,7 +1863,7 @@ def _remove_task(self, pkg_id: str) -> Optional[Task]:
else:
return None

def _requeue_task(self, task: Task) -> None:
def _requeue_task(self, task: Task, install_status: InstallStatus) -> None:
"""
Requeues a task that appears to be in progress by another process.

@@ -2052,7 +1871,10 @@ def _requeue_task(self, task: Task) -> None:
task (Task): the installation task for a package
"""
if task.status not in [BuildStatus.INSTALLED, BuildStatus.INSTALLING]:
tty.debug(f"{install_msg(task.pkg_id, self.pid)} in progress by another process")
tty.debug(
f"{install_msg(task.pkg_id, self.pid, install_status)} "
"in progress by another process"
)

new_task = task.next_attempt(self.installed)
new_task.status = BuildStatus.INSTALLING
@@ -2198,6 +2020,8 @@ def install(self) -> None:
single_requested_spec = len(self.build_requests) == 1
failed_build_requests = []

install_status = InstallStatus(len(self.build_pq))

# Only enable the terminal status line when we're in a tty without debug info
# enabled, so that the output does not get cluttered.
term_status = TermStatusLine(
@@ -2213,7 +2037,8 @@ def install(self) -> None:
keep_prefix = install_args.get("keep_prefix")

pkg, pkg_id, spec = task.pkg, task.pkg_id, task.pkg.spec
self.progress.set_term_title(f"Processing {pkg.name}")
install_status.next_pkg(pkg)
install_status.set_term_title(f"Processing {pkg.name}")
tty.debug(f"Processing {pkg_id}: task={task}")
# Ensure that the current spec has NO uninstalled dependencies,
# which is assumed to be reflected directly in its priority.
@@ -2242,7 +2067,7 @@ def install(self) -> None:
# Skip the installation if the spec is not being installed locally
# (i.e., if external or upstream) BUT flag it as installed since
# some package likely depends on it.
if _handle_external_and_upstream(pkg, task.explicit, self.progress):
if _handle_external_and_upstream(pkg, task.explicit):
term_status.clear()
self._flag_installed(pkg, task.dependents)
continue
@@ -2263,7 +2088,7 @@ def install(self) -> None:
# another process is likely (un)installing the spec or has
# determined the spec has already been installed (though the
# other process may be hung).
self.progress.set_term_title(f"Acquiring lock for {pkg.name}")
install_status.set_term_title(f"Acquiring lock for {pkg.name}")
term_status.add(pkg_id)
ltype, lock = self._ensure_locked("write", pkg)
if lock is None:
@@ -2275,7 +2100,7 @@ def install(self) -> None:
# can check the status presumably established by another process
# -- failed, installed, or uninstalled -- on the next pass.
if lock is None:
self._requeue_task(task)
self._requeue_task(task, install_status)
continue

term_status.clear()
@@ -2286,7 +2111,7 @@ def install(self) -> None:
task.request.overwrite_time = time.time()

# Determine state of installation artifacts and adjust accordingly.
self.progress.set_term_title(f"Preparing {pkg.name}")
install_status.set_term_title(f"Preparing {pkg.name}")
self._prepare_for_install(task)

# Flag an already installed package
@@ -2298,7 +2123,7 @@ def install(self) -> None:
if lock is not None:
self._update_installed(task)
path = spack.util.path.debug_padded_filter(pkg.prefix)
self.progress.set_installed(pkg, path)
_print_installed_pkg(path)
else:
# At this point we've failed to get a write or a read
# lock, which means another process has taken a write
@@ -2309,7 +2134,7 @@ def install(self) -> None:
# established by the other process -- failed, installed,
# or uninstalled -- on the next pass.
self.installed.remove(pkg_id)
self._requeue_task(task)
self._requeue_task(task, install_status)
continue

# Having a read lock on an uninstalled pkg may mean another
@@ -2322,19 +2147,21 @@ def install(self) -> None:
# uninstalled -- on the next pass.
if ltype == "read":
lock.release_read()
self._requeue_task(task)
self._requeue_task(task, install_status)
continue

# Proceed with the installation since we have an exclusive write
# lock on the package.
self.progress.set_term_title(f"Installing {pkg.name}")
install_status.set_term_title(f"Installing {pkg.name}")
try:
action = self._install_action(task)

if action == InstallAction.INSTALL:
self._install_task(task)
self._install_task(task, install_status)
elif action == InstallAction.OVERWRITE:
self._overwrite_install_task(task)
# spack.store.STORE.db is not really a Database object, but a small
# wrapper -- silence mypy
OverwriteInstall(self, spack.store.STORE.db, task, install_status).install() # type: ignore[arg-type] # noqa: E501

# If we installed then we should keep the prefix
stop_before_phase = getattr(pkg, "stop_before_phase", None)
@@ -2349,6 +2176,20 @@ def install(self) -> None:
)
raise

except binary_distribution.NoChecksumException as exc:
if task.cache_only:
raise

# Checking hash on downloaded binary failed.
tty.error(
f"Failed to install {pkg.name} from binary cache due "
f"to {str(exc)}: Requeueing to install from source."
)
# this overrides a full method, which is ugly.
task.use_cache = False # type: ignore[misc]
self._requeue_task(task, install_status)
continue

except (Exception, SystemExit) as exc:
self._update_failed(task, True, exc)

@@ -2384,12 +2225,7 @@ def install(self) -> None:
# Perform basic task cleanup for the installed spec to
# include downgrading the write to a read lock
if pkg.spec.installed:
# Do not clean up if this was an overwrite that wasn't completed
overwrite = spec.dag_hash() in task.request.overwrite
rec = spack.store.STORE.db.get_record(pkg.spec)
incomplete = task.request.overwrite_time > rec.installation_time
if not (overwrite and incomplete):
self._cleanup_task(pkg)
self._cleanup_task(pkg)

# Cleanup, which includes releasing all of the read locks
self._cleanup_all_tasks()
@@ -2541,6 +2377,7 @@ def run(self) -> bool:

print_install_test_log(self.pkg)
_print_timer(pre=self.pre, pkg_id=self.pkg_id, timer=self.timer)
_print_installed_pkg(self.pkg.prefix)

# preserve verbosity across runs
return self.echo
@@ -2686,22 +2523,39 @@ def deprecate(spec: "spack.spec.Spec", deprecator: "spack.spec.Spec", link_fn) -
link_fn(deprecator.prefix, spec.prefix)


class Requeue(Exception):
"""Raised when we need an error to indicate a requeueing situation.
class OverwriteInstall:
def __init__(
self,
installer: PackageInstaller,
database: spack.database.Database,
task: Task,
install_status: InstallStatus,
):
self.installer = installer
self.database = database
self.task = task
self.install_status = install_status

While this is raised and excepted, it does not represent an Error."""
def install(self):
"""
Try to run the install task overwriting the package prefix.
If this fails, try to recover the original install prefix. If that fails
too, mark the spec as uninstalled. This function always re-raises the original
install error if installation fails.
"""
try:
with fs.replace_directory_transaction(self.task.pkg.prefix):
self.installer._install_task(self.task, self.install_status)
except fs.CouldNotRestoreDirectoryBackup as e:
self.database.remove(self.task.pkg.spec)
tty.error(
f"Recovery of install dir of {self.task.pkg.name} failed due to "
f"{e.outer_exception.__class__.__name__}: {str(e.outer_exception)}. "
"The spec is now uninstalled."
)


class InstallError(spack.error.SpackError):
"""Raised when something goes wrong during install or uninstall.

The error can be annotated with a ``pkg`` attribute to allow the
caller to get the package for which the exception was raised.
"""

def __init__(self, message, long_msg=None, pkg=None):
super().__init__(message, long_msg)
self.pkg = pkg
# Unwrap the actual installation exception.
raise e.inner_exception


class BadInstallPhase(spack.error.InstallError):

@@ -101,26 +101,17 @@ def wrapper(instance, *args, **kwargs):
# installed explicitly will also be installed as a
# dependency of another spec. In this case append to both
# spec reports.
added = []
for current_spec in llnl.util.lang.dedupe([pkg.spec.root, pkg.spec]):
name = name_fmt.format(current_spec.name, current_spec.dag_hash(length=7))
try:
item = next((x for x in self.specs if x["name"] == name))
item["packages"].append(package)
added.append(item)
except StopIteration:
pass

start_time = time.time()
try:
value = wrapped_fn(instance, *args, **kwargs)

# If we are requeuing the task, it neither succeeded nor failed
# remove the package so we don't count it (yet) in either category
if value in spack.installer.requeue_results:
for item in added:
item["packages"].remove(package)

package["stdout"] = self.fetch_log(pkg)
package["installed_from_binary_cache"] = pkg.installed_from_binary_cache
self.on_success(pkg, kwargs, package)

@@ -3005,6 +3005,10 @@ def setup(

# Fail if we already know an unreachable node is requested
for spec in specs:
# concrete roots don't need their dependencies verified
if spec.concrete:
continue

missing_deps = [
str(d)
for d in spec.traverse()
@@ -153,7 +153,8 @@
r"(})?" # finish format string with non-escaped close brace }, or missing if not present
r"|"
# OPTION 3: mismatched close brace (option 2 would consume a matched open brace)
r"(})" r")", # brace
r"(})" # brace
r")",
re.IGNORECASE,
)

@@ -662,11 +663,9 @@ def versions(self):
def display_str(self):
"""Equivalent to {compiler.name}{@compiler.version} for Specs, without extra
@= for readability."""
if self.spec.concrete:
return f"{self.name}@{self.version}"
elif self.versions != vn.any_version:
return f"{self.name}@{self.versions}"
return self.name
if self.versions != vn.any_version:
return self.spec.format("{name}{@version}")
return self.spec.format("{name}")

def __lt__(self, other):
if not isinstance(other, CompilerSpec):
@@ -2679,7 +2678,7 @@ def name_and_dependency_types(s: str) -> Tuple[str, dt.DepFlag]:
return name, depflag

def spec_and_dependency_types(
s: Union[Spec, Tuple[Spec, str]]
s: Union[Spec, Tuple[Spec, str]],
) -> Tuple[Spec, dt.DepFlag]:
"""Given a non-string key in the literal, extracts the spec
and its dependency types.
@@ -5150,21 +5149,6 @@ def eval_conditional(string):
return eval(string, valid_variables)


class DagCountVisitor:
"""Class for counting the number of specs encountered during traversal."""

def __init__(self, depflag: int):
self.depflag: int = depflag
self.number: int = 0

def accept(self, item: spack.traverse.EdgeAndDepth) -> bool:
self.number += 1
return True

def neighbors(self, item: spack.traverse.EdgeAndDepth):
return item.edge.spec.edges_to_dependencies(depflag=self.depflag)


class SpecParseError(spack.error.SpecError):
"""Wrapper for ParseError for when we're parsing specs."""

@@ -231,13 +231,13 @@ def test_default_rpaths_create_install_default_layout(temporary_mirror_dir):
uninstall_cmd("-y", "--dependents", gspec.name)

# Test installing from build caches
buildcache_cmd("install", "-uo", cspec.name, sy_spec.name)
buildcache_cmd("install", "-u", cspec.name, sy_spec.name)

# This gives warning that spec is already installed
buildcache_cmd("install", "-uo", cspec.name)
buildcache_cmd("install", "-u", cspec.name)

# Test overwrite install
buildcache_cmd("install", "-fuo", cspec.name)
buildcache_cmd("install", "-fu", cspec.name)

buildcache_cmd("keys", "-f")
buildcache_cmd("list")
@@ -263,10 +263,10 @@ def test_default_rpaths_install_nondefault_layout(temporary_mirror_dir):

# Install some packages with dependent packages
# test install in non-default install path scheme
buildcache_cmd("install", "-uo", cspec.name, sy_spec.name)
buildcache_cmd("install", "-u", cspec.name, sy_spec.name)

# Test force install in non-default install path scheme
buildcache_cmd("install", "-ufo", cspec.name)
buildcache_cmd("install", "-uf", cspec.name)


@pytest.mark.requires_executables(*required_executables)
@@ -288,19 +288,19 @@ def test_relative_rpaths_install_default_layout(temporary_mirror_dir):
cspec = spack.concretize.concretize_one("corge")

# Install buildcache created with relativized rpaths
buildcache_cmd("install", "-ufo", cspec.name)
buildcache_cmd("install", "-uf", cspec.name)

# This gives warning that spec is already installed
buildcache_cmd("install", "-ufo", cspec.name)
buildcache_cmd("install", "-uf", cspec.name)

# Uninstall the package and deps
uninstall_cmd("-y", "--dependents", gspec.name)

# Install build cache
buildcache_cmd("install", "-ufo", cspec.name)
buildcache_cmd("install", "-uf", cspec.name)

# Test overwrite install
buildcache_cmd("install", "-ufo", cspec.name)
buildcache_cmd("install", "-uf", cspec.name)


@pytest.mark.requires_executables(*required_executables)
@@ -317,7 +317,7 @@ def test_relative_rpaths_install_nondefault(temporary_mirror_dir):
cspec = spack.concretize.concretize_one("corge")

# Test install in non-default install path scheme and relative path
buildcache_cmd("install", "-ufo", cspec.name)
buildcache_cmd("install", "-uf", cspec.name)


def test_push_and_fetch_keys(mock_gnupghome, tmp_path):

@@ -56,14 +56,33 @@ def test_build_request_strings(install_mockery):


@pytest.mark.parametrize(
"include_build_deps,deptypes", [(True, dt.BUILD | dt.LINK | dt.RUN), (False, dt.LINK | dt.RUN)]
"package_cache_only,dependencies_cache_only,package_deptypes,dependencies_deptypes",
[
(False, False, dt.BUILD | dt.LINK | dt.RUN, dt.BUILD | dt.LINK | dt.RUN),
(True, False, dt.LINK | dt.RUN, dt.BUILD | dt.LINK | dt.RUN),
(False, True, dt.BUILD | dt.LINK | dt.RUN, dt.LINK | dt.RUN),
(True, True, dt.LINK | dt.RUN, dt.LINK | dt.RUN),
],
)
def test_build_request_deptypes(install_mockery, include_build_deps, deptypes):
def test_build_request_deptypes(
install_mockery,
package_cache_only,
dependencies_cache_only,
package_deptypes,
dependencies_deptypes,
):
s = spack.concretize.concretize_one("dependent-install")

build_request = inst.BuildRequest(s.package, {"include_build_deps": include_build_deps})
build_request = inst.BuildRequest(
s.package,
{
"package_cache_only": package_cache_only,
"dependencies_cache_only": dependencies_cache_only,
},
)

package_deptypes = build_request.get_depflags(s.package)
dependency_deptypes = build_request.get_depflags(s["dependency-install"].package)
actual_package_deptypes = build_request.get_depflags(s.package)
actual_dependency_deptypes = build_request.get_depflags(s["dependency-install"].package)

assert package_deptypes == dependency_deptypes == deptypes
assert actual_package_deptypes == package_deptypes
assert actual_dependency_deptypes == dependencies_deptypes

@@ -32,7 +32,7 @@ def repro_dir(tmp_path):


def test_get_added_versions_new_checksum(mock_git_package_changes):
repo_path, filename, commits = mock_git_package_changes
repo, filename, commits = mock_git_package_changes

checksum_versions = {
"3f6576971397b379d4205ae5451ff5a68edf6c103b2f03c4188ed7075fbb5f04": Version("2.1.5"),
@@ -41,7 +41,7 @@ def test_get_added_versions_new_checksum(mock_git_package_changes):
"86993903527d9b12fc543335c19c1d33a93797b3d4d37648b5addae83679ecd8": Version("2.0.0"),
}

with fs.working_dir(str(repo_path)):
with fs.working_dir(repo.packages_path):
added_versions = ci.get_added_versions(
checksum_versions, filename, from_ref=commits[-1], to_ref=commits[-2]
)
@@ -50,7 +50,7 @@ def test_get_added_versions_new_checksum(mock_git_package_changes):


def test_get_added_versions_new_commit(mock_git_package_changes):
repo_path, filename, commits = mock_git_package_changes
repo, filename, commits = mock_git_package_changes

checksum_versions = {
"74253725f884e2424a0dd8ae3f69896d5377f325": Version("2.1.6"),
@@ -60,9 +60,9 @@ def test_get_added_versions_new_commit(mock_git_package_changes):
"86993903527d9b12fc543335c19c1d33a93797b3d4d37648b5addae83679ecd8": Version("2.0.0"),
}

with fs.working_dir(str(repo_path)):
with fs.working_dir(repo.packages_path):
added_versions = ci.get_added_versions(
checksum_versions, filename, from_ref=commits[2], to_ref=commits[1]
checksum_versions, filename, from_ref=commits[-2], to_ref=commits[-3]
)
assert len(added_versions) == 1
assert added_versions[0] == Version("2.1.6")

@@ -1978,6 +1978,13 @@ def test_ci_validate_git_versions_invalid(
assert f"Invalid commit for diff-test@{version}" in err


def mock_packages_path(path):
def packages_path():
return path

return packages_path


@pytest.fixture
def verify_standard_versions_valid(monkeypatch):
def validate_standard_versions(pkg, versions):
@@ -2024,9 +2031,12 @@ def test_ci_verify_versions_valid(
mock_git_package_changes,
verify_standard_versions_valid,
verify_git_versions_valid,
tmpdir,
):
repo_path, _, commits = mock_git_package_changes
monkeypatch.setattr(spack.paths, "prefix", repo_path)
repo, _, commits = mock_git_package_changes
spack.repo.PATH.put_first(repo)

monkeypatch.setattr(spack.repo, "packages_path", mock_packages_path(repo.packages_path))

out = ci_cmd("verify-versions", commits[-1], commits[-3])
assert "Validated diff-test@2.1.5" in out
@@ -2040,9 +2050,10 @@ def test_ci_verify_versions_standard_invalid(
verify_standard_versions_invalid,
verify_git_versions_invalid,
):
repo_path, _, commits = mock_git_package_changes
repo, _, commits = mock_git_package_changes
spack.repo.PATH.put_first(repo)

monkeypatch.setattr(spack.paths, "prefix", repo_path)
monkeypatch.setattr(spack.repo, "packages_path", mock_packages_path(repo.packages_path))

out = ci_cmd("verify-versions", commits[-1], commits[-3], fail_on_error=False)
assert "Invalid checksum found diff-test@2.1.5" in out
@@ -2050,8 +2061,10 @@ def test_ci_verify_versions_standard_invalid(


def test_ci_verify_versions_manual_package(monkeypatch, mock_packages, mock_git_package_changes):
repo_path, _, commits = mock_git_package_changes
monkeypatch.setattr(spack.paths, "prefix", repo_path)
repo, _, commits = mock_git_package_changes
spack.repo.PATH.put_first(repo)

monkeypatch.setattr(spack.repo, "packages_path", mock_packages_path(repo.packages_path))

pkg_class = spack.spec.Spec("diff-test").package_class
monkeypatch.setattr(pkg_class, "manual_download", True)

@@ -62,7 +62,7 @@
(
["-t", "intel", "/test-intel"],
"test-intel",
[r"TestIntel(IntelPackage)", r"setup_environment"],
[r"TestIntel(IntelOneApiPackage)", r"setup_environment"],
),
(
["-t", "makefile", "/test-makefile"],

@@ -24,8 +24,6 @@ def test_it_just_runs(pkg):
(
("mpi",),
[
"intel-mpi",
"intel-parallel-studio",
"mpich",
"mpilander",
"mvapich2",

@@ -243,13 +243,11 @@ def latest_commit():


@pytest.fixture
def mock_git_package_changes(git, tmpdir, override_git_repos_cache_path):
def mock_git_package_changes(git, tmpdir, override_git_repos_cache_path, monkeypatch):
"""Create a mock git repo with known structure of package edits

The structure of commits in this repo is as follows::

o diff-test: modification to make manual download package
|
o diff-test: add v1.2 (from a git ref)
|
o diff-test: add v1.1 (from source tarball)
@@ -261,8 +259,12 @@ def mock_git_package_changes(git, tmpdir, override_git_repos_cache_path):
Important attributes of the repo for test coverage are: multiple package
versions are added with some coming from a tarball and some from git refs.
"""
repo_path = str(tmpdir.mkdir("git_package_changes_repo"))
filename = "var/spack/repos/builtin/packages/diff-test/package.py"
filename = "diff-test/package.py"

repo_path, _ = spack.repo.create_repo(str(tmpdir.mkdir("myrepo")))
repo_cache = spack.util.file_cache.FileCache(str(tmpdir.mkdir("cache")))

repo = spack.repo.Repo(repo_path, cache=repo_cache)

def commit(message):
global commit_counter
@@ -276,7 +278,7 @@ def commit(message):
)
commit_counter += 1

with working_dir(repo_path):
with working_dir(repo.packages_path):
git("init")

git("config", "user.name", "Spack")
@@ -307,17 +309,11 @@ def latest_commit():
commit("diff-test: add v2.1.6")
commits.append(latest_commit())

# convert pkg-a to a manual download package
shutil.copy2(f"{spack.paths.test_path}/data/conftest/diff-test/package-3.txt", filename)
git("add", filename)
commit("diff-test: modification to make manual download package")
commits.append(latest_commit())

# The commits are ordered with the last commit first in the list
commits = list(reversed(commits))

# Return the git directory to install, the filename used, and the commits
yield repo_path, filename, commits
yield repo, filename, commits


@pytest.fixture(autouse=True)
@@ -1,23 +0,0 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack.package import *


class DiffTest(AutotoolsPackage):
"""zlib replacement with optimizations for next generation systems."""

homepage = "https://github.com/zlib-ng/zlib-ng"
url = "https://github.com/zlib-ng/zlib-ng/archive/2.0.0.tar.gz"
git = "https://github.com/zlib-ng/zlib-ng.git"

license("Zlib")

manual_download = True

version("2.1.6", tag="2.1.6", commit="74253725f884e2424a0dd8ae3f69896d5377f325")
version("2.1.5", sha256="3f6576971397b379d4205ae5451ff5a68edf6c103b2f03c4188ed7075fbb5f04")
version("2.1.4", sha256="a0293475e6a44a3f6c045229fe50f69dc0eebc62a42405a51f19d46a5541e77a")
version("2.0.7", sha256="6c0853bb27738b811f2b4d4af095323c3d5ce36ceed6b50e5f773204fb8f7200")
version("2.0.0", sha256="86993903527d9b12fc543335c19c1d33a93797b3d4d37648b5addae83679ecd8")
@@ -28,7 +28,7 @@
import spack.spec
import spack.store
import spack.util.lock as lk
import spack.util.spack_json as sjson
from spack.installer import PackageInstaller
from spack.main import SpackCommand


@@ -77,13 +77,6 @@ def create_build_task(
return inst.BuildTask(pkg, request=request, status=inst.BuildStatus.QUEUED)


def create_install_task(
pkg: spack.package_base.PackageBase, install_args: Optional[dict] = None
) -> inst.InstallTask:
request = inst.BuildRequest(pkg, {} if install_args is None else install_args)
return inst.InstallTask(pkg, request=request, status=inst.BuildStatus.QUEUED)


def create_installer(
specs: Union[List[str], List[spack.spec.Spec]], install_args: Optional[dict] = None
) -> inst.PackageInstaller:
@@ -123,15 +116,19 @@ def test_install_msg(monkeypatch):
install_msg = "Installing {0}".format(name)

monkeypatch.setattr(tty, "_debug", 0)
assert inst.install_msg(name, pid) == install_msg
assert inst.install_msg(name, pid, None) == install_msg

install_status = inst.InstallStatus(1)
expected = "{0} [0/1]".format(install_msg)
assert inst.install_msg(name, pid, install_status) == expected

monkeypatch.setattr(tty, "_debug", 1)
assert inst.install_msg(name, pid) == install_msg
assert inst.install_msg(name, pid, None) == install_msg

# Expect the PID to be added at debug level 2
monkeypatch.setattr(tty, "_debug", 2)
expected = "{0}: {1}".format(pid, install_msg)
assert inst.install_msg(name, pid) == expected
assert inst.install_msg(name, pid, None) == expected


def test_install_from_cache_errors(install_mockery):
@@ -143,15 +140,13 @@ def test_install_from_cache_errors(install_mockery):
with pytest.raises(
spack.error.InstallError, match="No binary found when cache-only was specified"
):
inst.PackageInstaller(
PackageInstaller(
[spec.package], package_cache_only=True, dependencies_cache_only=True
).install()
assert not spec.package.installed_from_binary_cache

# Check when don't expect to install only from binary cache
assert not inst._install_from_cache(
spec.package, inst.InstallerProgress([spec.package]), explicit=True, unsigned=False
)
assert not inst._install_from_cache(spec.package, explicit=True, unsigned=False)
assert not spec.package.installed_from_binary_cache


@@ -161,9 +156,7 @@ def test_install_from_cache_ok(install_mockery, monkeypatch):
monkeypatch.setattr(inst, "_try_install_from_binary_cache", _true)
monkeypatch.setattr(spack.hooks, "post_install", _noop)

assert inst._install_from_cache(
spec.package, inst.InstallerProgress([spec.package]), explicit=True, unsigned=False
)
assert inst._install_from_cache(spec.package, explicit=True, unsigned=False)


def test_process_external_package_module(install_mockery, monkeypatch, capfd):
@@ -228,6 +221,54 @@ def test_installer_str(install_mockery):
assert "failed (0)" in istr


def test_installer_prune_built_build_deps(install_mockery, monkeypatch, tmpdir):
r"""
Ensure that build dependencies of installed deps are pruned
from installer package queues.

(a)
/ \
/ \
(b) (c) <--- is installed already so we should
\ / | \ prune (f) from this install since
\ / | \ it is *only* needed to build (b)
(d) (e) (f)

Thus since (c) is already installed our build_pq dag should
only include four packages. [(a), (b), (c), (d), (e)]
"""

@property
def _mock_installed(self):
return self.name == "pkg-c"

# Mock the installed property to say that (b) is installed
monkeypatch.setattr(spack.spec.Spec, "installed", _mock_installed)

# Create mock repository with packages (a), (b), (c), (d), and (e)
builder = spack.repo.MockRepositoryBuilder(tmpdir.mkdir("mock-repo"))

builder.add_package("pkg-a", dependencies=[("pkg-b", "build", None), ("pkg-c", "build", None)])
builder.add_package("pkg-b", dependencies=[("pkg-d", "build", None)])
builder.add_package(
"pkg-c",
dependencies=[("pkg-d", "build", None), ("pkg-e", "all", None), ("pkg-f", "build", None)],
)
builder.add_package("pkg-d")
builder.add_package("pkg-e")
builder.add_package("pkg-f")

with spack.repo.use_repositories(builder.root):
installer = create_installer(["pkg-a"])

installer._init_queue()

# Assert that (c) is not in the build_pq
result = {task.pkg_id[:5] for _, task in installer.build_pq}
expected = {"pkg-a", "pkg-b", "pkg-c", "pkg-d", "pkg-e"}
assert result == expected


def test_check_before_phase_error(install_mockery):
s = spack.concretize.concretize_one("trivial-install-test-package")
s.package.stop_before_phase = "beforephase"
@@ -564,7 +605,7 @@ def test_check_deps_status_external(install_mockery, monkeypatch):
monkeypatch.setattr(spack.spec.Spec, "external", True)
installer._check_deps_status(request)

for dep in request.spec.traverse(root=False, deptype=request.get_depflags(request.spec)):
for dep in request.spec.traverse(root=False):
assert inst.package_id(dep) in installer.installed


@@ -576,7 +617,7 @@ def test_check_deps_status_upstream(install_mockery, monkeypatch):
monkeypatch.setattr(spack.spec.Spec, "installed_upstream", True)
installer._check_deps_status(request)

for dep in request.spec.traverse(root=False, deptype=request.get_depflags(request.spec)):
for dep in request.spec.traverse(root=False):
assert inst.package_id(dep) in installer.installed


@@ -627,13 +668,12 @@ def test_install_spliced_build_spec_installed(install_mockery, capfd, mock_fetch

# Do the splice.
out = spec.splice(dep, transitive)
inst.PackageInstaller([out.build_spec.package]).install()
PackageInstaller([out.build_spec.package]).install()

installer = create_installer([out], {"verbose": True, "fail_fast": True})
installer._init_queue()
for _, task in installer.build_pq:
assert isinstance(task, inst.RewireTask if task.pkg.spec.spliced else inst.InstallTask)
assert isinstance(task, inst.RewireTask if task.pkg.spec.spliced else inst.BuildTask)
installer.install()
for node in out.traverse():
assert node.installed
@@ -659,7 +699,7 @@ def test_install_splice_root_from_binary(
original_spec = spack.concretize.concretize_one(root_str)
spec_to_splice = spack.concretize.concretize_one("splice-h+foo")

inst.PackageInstaller([original_spec.package, spec_to_splice.package]).install()
PackageInstaller([original_spec.package, spec_to_splice.package]).install()

out = original_spec.splice(spec_to_splice, transitive)

@@ -676,7 +716,7 @@ def test_install_splice_root_from_binary(
uninstall = SpackCommand("uninstall")
uninstall("-ay")

inst.PackageInstaller([out.package], unsigned=True).install()
PackageInstaller([out.package], unsigned=True).install()

assert len(spack.store.STORE.db.query()) == len(list(out.traverse()))

@@ -684,10 +724,10 @@ def test_install_splice_root_from_binary(
def test_install_task_use_cache(install_mockery, monkeypatch):
installer = create_installer(["trivial-install-test-package"], {})
request = installer.build_requests[0]
task = create_install_task(request.pkg)
task = create_build_task(request.pkg)

monkeypatch.setattr(inst, "_install_from_cache", _true)
installer._install_task(task)
installer._install_task(task, None)
assert request.pkg_id in installer.installed


@@ -711,7 +751,7 @@ def _missing(*args, **kwargs):
assert inst.package_id(popped_task.pkg.spec) not in installer.build_tasks

monkeypatch.setattr(task, "execute", _missing)
installer._install_task(task)
installer._install_task(task, None)

# Ensure the dropped task/spec was added back by _install_task
assert inst.package_id(popped_task.pkg.spec) in installer.build_tasks
@@ -759,7 +799,7 @@ def test_requeue_task(install_mockery, capfd):
# temporarily set tty debug messages on so we can test output
current_debug_level = tty.debug_level()
tty.set_debug(1)
installer._requeue_task(task)
installer._requeue_task(task, None)
tty.set_debug(current_debug_level)

ids = list(installer.build_tasks)
@@ -912,11 +952,11 @@ def test_install_failed_not_fast(install_mockery, monkeypatch, capsys):
assert "Skipping build of pkg-a" in out


def _interrupt(installer, task, **kwargs):
def _interrupt(installer, task, install_status, **kwargs):
if task.pkg.name == "pkg-a":
raise KeyboardInterrupt("mock keyboard interrupt for pkg-a")
else:
return installer._real_install_task(task)
return installer._real_install_task(task, None)
# installer.installed.add(task.pkg.name)


@@ -942,12 +982,12 @@ class MyBuildException(Exception):
pass


def _install_fail_my_build_exception(installer, task, **kwargs):
def _install_fail_my_build_exception(installer, task, install_status, **kwargs):
if task.pkg.name == "pkg-a":
raise MyBuildException("mock internal package build error for pkg-a")
else:
# No need for more complex logic here because no splices
task.execute(installer.progress)
task.execute(install_status)
installer._update_installed(task)


@@ -1032,8 +1072,8 @@ def test_install_lock_failures(install_mockery, monkeypatch, capfd):
"""Cover basic install lock failure handling in a single pass."""

# Note: this test relies on installing a package with no dependencies
def _requeued(installer, task):
tty.msg(f"requeued {task.pkg.spec.name}")
def _requeued(installer, task, install_status):
tty.msg("requeued {0}".format(task.pkg.spec.name))

installer = create_installer(["pkg-c"], {})

@@ -1066,7 +1106,7 @@ def _prep(installer, task):
# also do not allow the package to be locked again
monkeypatch.setattr(inst.PackageInstaller, "_ensure_locked", _not_locked)

def _requeued(installer, task):
def _requeued(installer, task, install_status):
tty.msg(f"requeued {inst.package_id(task.pkg.spec)}")

# Flag the package as installed
@@ -1098,8 +1138,8 @@ def _prep(installer, task):
tty.msg("preparing {0}".format(task.pkg.spec.name))
assert task.pkg.spec.name not in installer.installed

def _requeued(installer, task):
tty.msg(f"requeued {task.pkg.spec.name}")
def _requeued(installer, task, install_status):
tty.msg("requeued {0}".format(task.pkg.spec.name))

# Force a read lock
monkeypatch.setattr(inst.PackageInstaller, "_ensure_locked", _read)
@@ -1141,7 +1181,7 @@ def test_install_implicit(install_mockery, mock_fetch):
assert not create_build_task(pkg).explicit


def test_overwrite_install_backup_success(temporary_store, config, mock_packages, monkeypatch):
def test_overwrite_install_backup_success(temporary_store, config, mock_packages, tmpdir):
"""
When doing an overwrite install that fails, Spack should restore the backup
of the original prefix, and leave the original spec marked installed.
@@ -1156,12 +1196,11 @@ def test_overwrite_install_backup_success(temporary_store, config, mock_packages
installed_file = os.path.join(task.pkg.prefix, "some_file")
fs.touchp(installed_file)

def _install_task(self, task):
shutil.rmtree(task.pkg.prefix, ignore_errors=True)
fs.mkdirp(task.pkg.prefix)
raise Exception("Some fatal install error")

monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install_task)
class InstallerThatWipesThePrefixDir:
def _install_task(self, task, install_status):
shutil.rmtree(task.pkg.prefix, ignore_errors=True)
fs.mkdirp(task.pkg.prefix)
raise Exception("Some fatal install error")

class FakeDatabase:
called = False
@@ -1169,25 +1208,46 @@ class FakeDatabase:
def remove(self, spec):
self.called = True

monkeypatch.setattr(spack.store.STORE, "db", FakeDatabase())
fake_installer = InstallerThatWipesThePrefixDir()
fake_db = FakeDatabase()
overwrite_install = inst.OverwriteInstall(fake_installer, fake_db, task, None)

# Installation should throw the installation exception, not the backup
# failure.
with pytest.raises(Exception, match="Some fatal install error"):
installer._overwrite_install_task(task)
overwrite_install.install()

# Make sure the package is not marked uninstalled and the original dir
# is back.
assert not spack.store.STORE.db.called
assert not fake_db.called
assert os.path.exists(installed_file)


def test_overwrite_install_backup_failure(temporary_store, config, mock_packages, monkeypatch):
def test_overwrite_install_backup_failure(temporary_store, config, mock_packages, tmpdir):
"""
When doing an overwrite install that fails, Spack should try to recover the
original prefix. If that fails, the spec is lost, and it should be removed
from the database.
"""
# Note: this test relies on installing a package with no dependencies

class InstallerThatAccidentallyDeletesTheBackupDir:
def _install_task(self, task, install_status):
# Remove the backup directory, which is at the same level as the prefix,
# starting with .backup
backup_glob = os.path.join(
os.path.dirname(os.path.normpath(task.pkg.prefix)), ".backup*"
)
for backup in glob.iglob(backup_glob):
shutil.rmtree(backup)
raise Exception("Some fatal install error")

class FakeDatabase:
called = False

def remove(self, spec):
self.called = True

# Get a build task. TODO: refactor this to avoid calling internal methods
installer = create_installer(["pkg-c"])
installer._init_queue()
@@ -1197,32 +1257,18 @@ def test_overwrite_install_backup_failure(temporary_store, config, mock_packages
installed_file = os.path.join(task.pkg.prefix, "some_file")
fs.touchp(installed_file)

def _install_task(self, task):
# Remove the backup directory, which is at the same level as the prefix,
# starting with .backup
backup_glob = os.path.join(os.path.dirname(os.path.normpath(task.pkg.prefix)), ".backup*")
for backup in glob.iglob(backup_glob):
shutil.rmtree(backup)
raise Exception("Some fatal install error")

monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install_task)

class FakeDatabase:
called = False

def remove(self, spec):
self.called = True

monkeypatch.setattr(spack.store.STORE, "db", FakeDatabase())
fake_installer = InstallerThatAccidentallyDeletesTheBackupDir()
fake_db = FakeDatabase()
overwrite_install = inst.OverwriteInstall(fake_installer, fake_db, task, None)

# Installation should throw the installation exception, not the backup
# failure.
with pytest.raises(Exception, match="Some fatal install error"):
installer._overwrite_install_task(task)
overwrite_install.install()

# Make sure that `remove` was called on the database after an unsuccessful
# attempt to restore the backup.
assert spack.store.STORE.db.called
assert fake_db.called


def test_term_status_line():
@@ -1274,7 +1320,7 @@ def test_print_install_test_log_skipped(install_mockery, mock_packages, capfd, r
pkg = s.package

pkg.run_tests = run_tests
inst.print_install_test_log(pkg)
spack.installer.print_install_test_log(pkg)
out = capfd.readouterr()[0]
assert out == ""

@@ -1291,23 +1337,12 @@ def test_print_install_test_log_failures(
pkg.run_tests = True
pkg.tester.test_log_file = str(tmpdir.join("test-log.txt"))
pkg.tester.add_failure(AssertionError("test"), "test-failure")
inst.print_install_test_log(pkg)
spack.installer.print_install_test_log(pkg)
err = capfd.readouterr()[1]
assert "no test log file" in err

# Having test log results in path being output
fs.touch(pkg.tester.test_log_file)
inst.print_install_test_log(pkg)
spack.installer.print_install_test_log(pkg)
out = capfd.readouterr()[0]
assert "See test results at" in out


def test_specs_count(install_mockery, mock_packages):
"""Check SpecCounts DAG visitor total matches expected."""
spec = spack.spec.Spec("mpileaks^mpich").concretized()
counter = inst.SpecsCount(dt.LINK | dt.RUN | dt.BUILD)
number_specs = counter.total([spec])

json = sjson.load(spec.to_json())
number_spec_nodes = len(json["spec"]["nodes"])
assert number_specs == number_spec_nodes

@@ -7,9 +7,9 @@


def test_modified_files(mock_git_package_changes):
repo_path, filename, commits = mock_git_package_changes
repo, filename, commits = mock_git_package_changes

with working_dir(repo_path):
with working_dir(repo.packages_path):
files = get_modified_files(from_ref="HEAD~1", to_ref="HEAD")
assert len(files) == 1
assert files[0] == filename

@@ -209,7 +209,7 @@ def configure_args(self):
linalg = spec["lapack"].libs + spec["blas"].libs

# linalg_flavor is selected using the virtual lapack provider
is_using_intel_libraries = spec["lapack"].name in INTEL_MATH_LIBRARIES
is_using_intel_libraries = spec["lapack"].name == "intel-oneapi-mkl"

# These *must* be elifs, otherwise spack's lapack provider is ignored
# linalg_flavor ends up as "custom", which is not supported by abinit@9.10.3:

@@ -86,7 +86,7 @@ def cmake_args(self):
]
args.append(self.define("CUDA_architecture_build_targets", arch_list))

if self.spec["blas"].name in INTEL_MATH_LIBRARIES:
if self.spec.satisfies("^[virtuals=blas] intel-oneapi-mkl"):
if self.version >= Version("3.8.0"):
args.append(self.define("AF_COMPUTE_LIBRARY", "Intel-MKL"))
else:

@@ -52,7 +52,7 @@ def edit(self, spec, prefix):
if spec["blas"].name == "openblas":
env["OPENBLAS"] = "1"

elif spec["blas"].name in INTEL_MATH_LIBRARIES:
elif spec.satisfies("^[virtuals=blas] intel-oneapi-mkl"):
env["MKL"] = "1"
env["MKL_BASE"] = spec["mkl"].prefix.mkl
else:

@@ -26,7 +26,7 @@ class Batchedblas(MakefilePackage):
def edit(self, spec, prefix):
CCFLAGS = [self.compiler.openmp_flag, "-I./", "-O3"]
BLAS = ["-lm", spec["blas"].libs.ld_flags]
if spec["blas"].name not in INTEL_MATH_LIBRARIES:
if not spec.satisfies("^[virtuals=blas] intel-oneapi-mkl"):
CCFLAGS.append("-D_CBLAS_")
if spec.satisfies("%intel"):
CCFLAGS.extend(["-Os"])

@@ -69,7 +69,8 @@ class Beatnik(CMakePackage, CudaPackage, ROCmPackage):
conflicts("mpich ~cuda", when="+cuda")
conflicts("mpich ~rocm", when="+rocm")
conflicts("openmpi ~cuda", when="+cuda")
conflicts("^intel-mpi") # Heffte won't build with intel MPI because of needed C++ MPI support
# Heffte won't build with intel MPI because of needed C++ MPI support
conflicts("^intel-oneapi-mpi")
conflicts("^spectrum-mpi", when="^cuda@11.3:") # cuda-aware spectrum is broken with cuda 11.3:

# Propagate CUDA and AMD GPU targets to cabana

@@ -78,12 +78,7 @@ class Berkeleygw(MakefilePackage):
depends_on("cray-fftw+openmp", when="^[virtuals=fftw-api] cray-fftw")
depends_on("fftw+openmp", when="^[virtuals=fftw-api] fftw")
depends_on("fujitsu-fftw+openmp", when="^[virtuals=fftw-api] fujitsu-fftw")
depends_on("intel-mkl threads=openmp", when="^[virtuals=fftw-api] intel-mkl")
depends_on("intel-oneapi-mkl threads=openmp", when="^[virtuals=fftw-api] intel-oneapi-mkl")
depends_on(
"intel-parallel-studio threads=openmp",
when="^[virtuals=fftw-api] intel-parallel-studio",
)

with when("~openmp"):
depends_on("acfl threads=none", when="^[virtuals=fftw-api] acfl")
@@ -92,11 +87,7 @@ class Berkeleygw(MakefilePackage):
depends_on("cray-fftw~openmp", when="^[virtuals=fftw-api] cray-fftw")
depends_on("fftw~openmp", when="^[virtuals=fftw-api] fftw")
depends_on("fujitsu-fftw~openmp", when="^[virtuals=fftw-api] fujitsu-fftw")
depends_on("intel-mkl threads=none", when="^[virtuals=fftw-api] intel-mkl")
depends_on("intel-oneapi-mkl threads=none", when="^[virtuals=fftw-api] intel-oneapi-mkl")
depends_on(
"intel-parallel-studio threads=none", when="^[virtuals=fftw-api] intel-parallel-studio"
)

# in order to run the installed python scripts
depends_on("python", type=("build", "run"), when="+python")

@@ -89,17 +89,20 @@ def cosma_blas_cmake_arg(self):
query_to_cmake_arg = [
("+cuda", "CUDA"),
("+rocm", "ROCM"),
("^intel-mkl", "MKL"),
("^intel-oneapi-mkl", "MKL"),
("^cray-libsci", "CRAY_LIBSCI"),
("^netlib-lapack", "CUSTOM"),
("^openblas", "OPENBLAS"),
("^fujitsu-ssl2", "SSL2"),
("^[virtuals=blas] intel-oneapi-mkl", "MKL"),
("^[virtuals=blas] cray-libsci", "CRAY_LIBSCI"),
("^[virtuals=blas] netlib-lapack", "CUSTOM"),
("^[virtuals=blas] openblas", "OPENBLAS"),
("^[virtuals=blas] fujitsu-ssl2", "SSL2"),
]

if self.version >= Version("2.4.0"):
query_to_cmake_arg.extend(
[("^blis", "BLIS"), ("^amdblis", "BLIS"), ("^atlas", "ATLAS")]
[
("^[virtuals=blas] blis", "BLIS"),
("^[virtuals=blas] amdblis", "BLIS"),
("^[virtuals=blas] atlas", "ATLAS"),
]
)

for query, cmake_arg in query_to_cmake_arg:
@@ -113,7 +116,7 @@ def cosma_scalapack_cmake_arg(self):

if spec.satisfies("~scalapack"):
return "OFF"
elif spec.satisfies("^intel-mkl") or spec.satisfies("^intel-oneapi-mkl"):
elif spec.satisfies("^[virtuals=scalapack] intel-oneapi-mkl"):
return "MKL"
elif spec.satisfies("^cray-libsci"):
return "CRAY_LIBSCI"

@@ -58,9 +58,9 @@ def costa_scalapack_cmake_arg(self):

if spec.satisfies("~scalapack"):
return "OFF"
elif spec.satisfies("^intel-mkl") or spec.satisfies("^intel-oneapi-mkl"):
elif spec.satisfies("^[virtuals=scalapack] intel-oneapi-mkl"):
return "MKL"
elif spec.satisfies("^cray-libsci"):
elif spec.satisfies("^[virtuals=scalapack] cray-libsci"):
return "CRAY_LIBSCI"

return "CUSTOM"

@@ -495,7 +495,7 @@ def edit(self, pkg, spec, prefix):
}

dflags = ["-DNDEBUG"] if spec.satisfies("@:2023.2") else []
if fftw.name in ("intel-mkl", "intel-parallel-studio", "intel-oneapi-mkl"):
if fftw.name == "intel-oneapi-mkl":
cppflags = ["-D__FFTW3_MKL", "-I{0}".format(fftw_header_dir)]
else:
cppflags = ["-D__FFTW3", "-I{0}".format(fftw_header_dir)]
@@ -705,7 +705,7 @@ def edit(self, pkg, spec, prefix):
if spec.satisfies("platform=darwin"):
cppflags.extend(["-D__NO_STATM_ACCESS"])

if spec["blas"].name in ("intel-mkl", "intel-parallel-studio", "intel-oneapi-mkl"):
if spec["blas"].name == "intel-oneapi-mkl":
cppflags += ["-D__MKL"]
elif spec["blas"].name == "accelerate":
cppflags += ["-D__ACCELERATE"]
@@ -725,8 +725,6 @@ def edit(self, pkg, spec, prefix):
else:
mpi = spec["mpi:cxx"].libs

# while intel-mkl has a mpi variant and adds the scalapack
# libs to its libs, intel-oneapi-mkl does not.
if spec["scalapack"].name == "intel-oneapi-mkl":
mpi_impl = "openmpi" if spec["mpi"].name in ["openmpi", "hpcx-mpi"] else "intelmpi"
scalapack = [
@@ -1098,7 +1096,7 @@ def cmake_args(self):
lapack = spec["lapack"]
blas = spec["blas"]

if blas.name in ["intel-mkl", "intel-parallel-studio", "intel-oneapi-mkl"]:
if blas.name == "intel-oneapi-mkl":
args += ["-DCP2K_BLAS_VENDOR=MKL"]
if sys.platform == "darwin":
args += [

@@ -43,7 +43,7 @@ def url_for_version(self, version):
def configure_args(self):
config_args = []

if self.spec["fftw-api"].name in INTEL_MATH_LIBRARIES:
if self.spec.satisfies("^[virtuals=fftw-api] intel-oneapi-mkl"):
config_args.extend(
[
"--enable-mkl",

@@ -388,12 +388,16 @@ class Dealii(CMakePackage, CudaPackage):

# Check that the combination of variants makes sense
# 64-bit BLAS:
for _package in ["openblas", "intel-mkl", "intel-parallel-studio+mkl"]:
conflicts(
"^{0}+ilp64".format(_package),
when="@:8.5.1",
msg="64bit BLAS is only supported from 9.0.0",
)
conflicts(
"^[virtuals=lapack] openblas+ilp64",
when="@:8.5.1",
msg="64bit BLAS is only supported from 9.0.0",
)
conflicts(
"^[virtuals=lapack] intel-oneapi-mkl+ilp64",
when="@:8.5.1",
msg="64bit BLAS is only supported from 9.0.0",
)

# MPI requirements:
for _package in [
@@ -505,10 +509,8 @@ def cmake_args(self):
# 64 bit indices
options.append(self.define_from_variant("DEAL_II_WITH_64BIT_INDICES", "int64"))

if (
spec.satisfies("^openblas+ilp64")
or spec.satisfies("^intel-mkl+ilp64")
or spec.satisfies("^intel-parallel-studio+mkl+ilp64")
if spec.satisfies("^[virtuals=lapack] openblas+ilp64") or spec.satisfies(
"^[virtuals=lapack] intel-oneapi-mkl+ilp64"
):
options.append(self.define("LAPACK_WITH_64BIT_BLAS_INDICES", True))

@@ -570,20 +572,9 @@ def cmake_args(self):
options.append(self.define_from_variant("DEAL_II_WITH_TBB", "threads"))
else:
options.append(self.define_from_variant("DEAL_II_WITH_THREADS", "threads"))

if spec.satisfies("+threads"):
if spec.satisfies("^intel-parallel-studio+tbb"):
# deal.II/cmake will have hard time picking up TBB from Intel.
tbb_ver = ".".join(("%s" % spec["tbb"].version).split(".")[1:])
options.extend(
[
self.define("TBB_FOUND", True),
self.define("TBB_VERSION", tbb_ver),
self.define("TBB_INCLUDE_DIRS", ";".join(spec["tbb"].headers.directories)),
self.define("TBB_LIBRARIES", spec["tbb"].libs.joined(";")),
]
)
else:
options.append(self.define("TBB_DIR", spec["tbb"].prefix))
options.append(self.define("TBB_DIR", spec["tbb"].prefix))

# Optional dependencies for which library names are the same as CMake
# variables:

@@ -182,71 +182,41 @@ def cmake_args(self):
args.append(self.define_from_variant("BUILD_SHARED_LIBS", "shared"))

# BLAS/LAPACK
if spec.version <= Version("0.4") and spec["lapack"].name in INTEL_MATH_LIBRARIES:
if spec.version <= Version("0.4") and spec.satisfies(
"^[virtuals=lapack] intel-oneapi-mkl"
):
mkl_provider = spec["lapack"].name

vmap = {
"intel-oneapi-mkl": {
"threading": {
"none": "sequential",
"openmp": "gnu_thread",
"tbb": "tbb_thread",
},
"mpi": {"intel-mpi": "intelmpi", "mpich": "mpich", "openmpi": "openmpi"},
},
"intel-mkl": {
"threading": {"none": "seq", "openmp": "omp", "tbb": "tbb"},
"mpi": {"intel-mpi": "mpich", "mpich": "mpich", "openmpi": "ompi"},
},
"threading": {"none": "sequential", "openmp": "gnu_thread", "tbb": "tbb_thread"},
"mpi": {"intel-oneapi-mpi": "intelmpi", "mpich": "mpich", "openmpi": "openmpi"},
}

if mkl_provider not in vmap.keys():
raise RuntimeError(
f"dla-future does not support {mkl_provider} as lapack provider"
)
mkl_mapper = vmap[mkl_provider]

mkl_threads = mkl_mapper["threading"][spec[mkl_provider].variants["threads"].value]
if mkl_provider == "intel-oneapi-mkl":
args += [
self.define("DLAF_WITH_MKL", True),
self.define("MKL_INTERFACE", "lp64"),
self.define("MKL_THREADING", mkl_threads),
]
elif mkl_provider == "intel-mkl":
args += [
(
self.define("DLAF_WITH_MKL", True)
if spec.version <= Version("0.3")
else self.define("DLAF_WITH_MKL_LEGACY", True)
),
self.define("MKL_LAPACK_TARGET", f"mkl::mkl_intel_32bit_{mkl_threads}_dyn"),
]
mkl_threads = vmap["threading"][spec["intel-oneapi-mkl"].variants["threads"].value]
args += [
self.define("DLAF_WITH_MKL", True),
self.define("MKL_INTERFACE", "lp64"),
self.define("MKL_THREADING", mkl_threads),
]

if spec.satisfies("+scalapack"):
try:
mpi_provider = spec["mpi"].name
if mpi_provider in ["mpich", "cray-mpich", "mvapich", "mvapich2"]:
mkl_mpi = mkl_mapper["mpi"]["mpich"]
mkl_mpi = vmap["mpi"]["mpich"]
else:
mkl_mpi = mkl_mapper["mpi"][mpi_provider]
mkl_mpi = vmap["mpi"][mpi_provider]
except KeyError:
raise RuntimeError(
f"dla-future does not support {spec['mpi'].name} as mpi provider with "
f"the selected scalapack provider {mkl_provider}"
)

if mkl_provider == "intel-oneapi-mkl":
args.append(self.define("MKL_MPI", mkl_mpi))
elif mkl_provider == "intel-mkl":
args.append(
self.define(
"MKL_SCALAPACK_TARGET",
f"mkl::scalapack_{mkl_mpi}_intel_32bit_{mkl_threads}_dyn",
)
)
args.append(self.define("MKL_MPI", mkl_mpi))
else:
args.append(self.define("DLAF_WITH_MKL", spec["lapack"].name in INTEL_MATH_LIBRARIES))
args.append(
self.define("DLAF_WITH_MKL", spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl"))
)
add_dlaf_prefix = lambda x: x if spec.satisfies("@:0.6") else "DLAF_" + x
args.append(
self.define(

@@ -63,7 +63,6 @@ class Dyninst(CMakePackage):
# package layout. Need to use tbb provided config instead.
conflicts("^intel-tbb@2021.1:")
conflicts("^intel-oneapi-tbb@2021.1:")
conflicts("^intel-parallel-studio")

depends_on("tbb")
requires("^[virtuals=tbb] intel-tbb@2019.9:", when="@13.0.0:")
@@ -81,11 +81,7 @@ class Elk(MakefilePackage):

depends_on("mkl", when="linalg=mkl")
with when("linalg=mkl +openmp"):
depends_on("intel-mkl threads=openmp", when="^[virtuals=mkl] intel-mkl")
depends_on("intel-oneapi-mkl threads=openmp", when="^[virtuals=mkl] intel-oneapi-mkl")
depends_on(
"intel-parallel-studio threads=openmp", when="^[virtuals=mkl] intel-parallel-studio"
)

depends_on("openblas", when="linalg=openblas")
depends_on("openblas threads=openmp", when="linalg=openblas +openmp")
@@ -200,8 +196,6 @@ def edit(self, spec, prefix):
config["SRC_FFT"] += " cfftifc_mkl.f90"
cp = which("cp")
mkl_prefix = spec["mkl"].prefix
if spec.satisfies("^intel-mkl"):
mkl_prefix = mkl_prefix.mkl
cp(
join_path(mkl_prefix.include, "mkl_dfti.f90"),
join_path(self.build_directory, "src"),
@@ -69,7 +69,7 @@ def cmake_args(self):
else:
args.append("-DWITH_MPI=OFF")

if self.spec.satisfies("^intel-mkl"):
if self.spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl"):
args.append("-DWITH_MKL:BOOL=TRUE")

if spec.satisfies("+openmp"):
@@ -93,14 +93,9 @@ class Elpa(AutotoolsPackage, CudaPackage, ROCmPackage):
# https://gitlab.mpcdf.mpg.de/elpa/elpa/-/blob/master/documentation/PERFORMANCE_TUNING.md?ref_type=heads#builds-with-openmp-enabled
with when("+openmp"):
requires("^openblas threads=openmp", when="^[virtuals=blas,lapack] openblas")
requires("^intel-mkl threads=openmp", when="^[virtuals=blas,lapack] intel-mkl")
requires(
"^intel-oneapi-mkl threads=openmp", when="^[virtuals=blas,lapack] intel-oneapi-mkl"
)
requires(
"^intel-parallel-studio threads=openmp",
when="^[virtuals=blas,lapack] intel-parallel-studio",
)

# fails to build due to broken type-bound procedures in OMP parallel regions
conflicts(
@@ -24,6 +24,7 @@ class Ensmallen(CMakePackage):
version("2.21.1", sha256="820eee4d8aa32662ff6a7d883a1bcaf4e9bf9ca0a3171d94c5398fe745008750")
version("2.19.1", sha256="f36ad7f08b0688d2a8152e1c73dd437c56ed7a5af5facf65db6ffd977b275b2e")

depends_on("c", type="build")
depends_on("cxx", type="build")

variant("openmp", default=True, description="Use OpenMP for parallelization")
@@ -343,11 +343,7 @@ def setup_build_environment(self, env):
"^[virtuals=mpi] hpcx-mpi"
):
env.set("ESMF_COMM", "openmpi")
elif (
spec.satisfies("^[virtuals=mpi] intel-parallel-studio+mpi")
or spec.satisfies("^[virtuals=mpi] intel-mpi")
or spec.satisfies("^[virtuals=mpi] intel-oneapi-mpi")
):
elif spec.satisfies("^[virtuals=mpi] intel-oneapi-mpi"):
env.set("ESMF_COMM", "intelmpi")
elif spec.satisfies("^[virtuals=mpi] mpt"):
# MPT is the HPE (SGI) variant of mpich
@@ -30,21 +30,15 @@ class Exabayes(AutotoolsPackage):
# GCC 7.1.0 is used.
conflicts("%gcc@:4.5.4, 7.1.0:", when="@:1.5.0")
conflicts("%clang@:3.1")
conflicts("^intel-mpi", when="+mpi")
conflicts("^intel-parallel-studio+mpi", when="+mpi")
conflicts("^intel-oneapi-mpi", when="+mpi")
conflicts("^mvapich2", when="+mpi")
conflicts("^spectrum-mpi", when="+mpi")

def configure_args(self):
args = []
if self.spec.satisfies("+mpi"):
args.append("--enable-mpi")
else:
args.append("--disable-mpi")
return args
return self.enable_or_disable("mpi")

def flag_handler(self, name, flags):
if name.lower() == "cxxflags":
# manual cites need for c++11
flags.append(self.compiler.cxx11_flag)
return (flags, None, None)
return flags, None, None
@@ -42,11 +42,8 @@ class Exciting(MakefilePackage):
depends_on("mkl", when="+mkl")
depends_on("mpi", when="+mpi")
depends_on("scalapack", when="+scalapack")
# conflicts('%gcc@10:', msg='exciting cannot be built with GCC 10')

requires("%intel", when="^mkl", msg="Intel MKL only works with the Intel compiler")
requires("%intel", when="^intel-mkl", msg="Intel MKL only works with the Intel compiler")
requires("%intel", when="^intel-mpi", msg="Intel MPI only works with the Intel compiler")
conflicts("%intel")

def patch(self):
"""Fix bad logic in m_makespectrum.f90 for the Oxygen release"""
@@ -44,10 +44,7 @@ class Fds(MakefilePackage):
)

requires(
"^intel-mkl",
"^intel-oneapi-mkl",
policy="one_of",
msg="FDS builds require either Intel MKL or Intel oneAPI MKL library",
"^intel-oneapi-mkl", policy="one_of", msg="FDS builds require Intel oneAPI MKL library"
)

requires(
@@ -56,12 +53,6 @@ class Fds(MakefilePackage):
msg="OpenMPI can only be used with GNU Fortran on Linux platform",
)

requires(
"^intel-mpi^intel-mkl",
when="platform=linux %intel",
msg="Intel MPI and Intel MKL can only be used with Intel Fortran on Linux platform",
)

requires(
"^intel-oneapi-mpi^intel-oneapi-mkl",
when="platform=linux %oneapi",
@@ -85,7 +76,7 @@ def edit(self, spec, prefix):
@property
def build_targets(self):
spec = self.spec
mpi_mapping = {"openmpi": "ompi", "intel-oneapi-mpi": "impi", "intel-mpi": "impi"}
mpi_mapping = {"openmpi": "ompi", "intel-oneapi-mpi": "impi"}
compiler_mapping = {"gcc": "gnu", "oneapi": "intel", "intel": "intel"}
platform_mapping = {"linux": "linux", "darwin": "osx"}
mpi_prefix = mpi_mapping[spec["mpi"].name]
@@ -7,7 +7,7 @@

class G4tendl(Package):
"""Geant4 data for incident particles [optional]"""
"""Optional Geant4 data for incident particles."""

homepage = "https://geant4.web.cern.ch"
url = "https://geant4-data.web.cern.ch/geant4-data/datasets/G4TENDL.1.3.tar.gz"
@@ -142,7 +142,7 @@ class Gaudi(CMakePackage, CudaPackage):
# ROOT does not like being exposed to LLVM symbols.

# The Intel VTune dependency is taken aside because it requires a license
depends_on("intel-parallel-studio -mpi +vtune", when="+vtune")
depends_on("intel-oneapi-vtune", when="+vtune")

def patch(self):
# ensure an empty pytest.ini is present to prevent finding one
@@ -21,6 +21,7 @@ class Geant4(CMakePackage):

maintainers("drbenmorgan", "sethrj")

version("11.3.1", sha256="9059da076928f25cab1ff1f35e0f611a4d7fe005e374e9b8d7f3ff2434b7af54")
version("11.3.0", sha256="d9d71daff8890a7b5e0e33ea9a65fe6308ad6713000b43ba6705af77078e7ead")
version("11.2.2", sha256="3a8d98c63fc52578f6ebf166d7dffaec36256a186d57f2520c39790367700c8d")
version("11.2.1", sha256="76c9093b01128ee2b45a6f4020a1bcb64d2a8141386dea4674b5ae28bcd23293")
@@ -41,7 +41,7 @@ class Gearshifft(CMakePackage):
depends_on("clfft@2.12.0:", when="+clfft")
depends_on("fftw@3.3.4:~mpi~openmp", when="+fftw~openmp")
depends_on("fftw@3.3.4:~mpi+openmp", when="+fftw+openmp")
depends_on("intel-mkl threads=openmp", when="+mkl")
depends_on("intel-oneapi-mkl threads=openmp", when="+mkl")
depends_on("rocfft", when="+rocfft")

def cmake_args(self):
@@ -15,6 +15,8 @@ class Glab(GoPackage):
|
||||
|
||||
license("MIT")
|
||||
|
||||
version("1.55.0", sha256="21f58698b92035461e8e8ba9040429f4b5a0f6d528d8333834ef522a973384c8")
|
||||
version("1.54.0", sha256="99f5dd785041ad26c8463ae8630e98a657aa542a2bb02333d50243dd5cfdf9cb")
|
||||
version("1.53.0", sha256="2930aa5dd76030cc6edcc33483bb49dd6a328eb531d0685733ca7be7b906e915")
|
||||
version("1.52.0", sha256="585495e53d3994172fb927218627b7470678bc766320cb52f4b4204238677dde")
|
||||
version("1.51.0", sha256="6a95d827004fee258aacb49a427875e3b505b063cc578933d965cd56481f5a19")
|
||||
@@ -34,20 +36,38 @@ class Glab(GoPackage):
|
||||
version("1.21.1", sha256="8bb35c5cf6b011ff14d1eaa9ab70ec052d296978792984250e9063b006ee4d50")
|
||||
version("1.20.0", sha256="6beb0186fa50d0dea3b05fcfe6e4bc1f9be0c07aa5fa15b37ca2047b16980412")
|
||||
|
||||
depends_on("go@1.13:", type="build")
|
||||
depends_on("go@1.17:", type="build", when="@1.22:")
|
||||
depends_on("go@1.18:", type="build", when="@1.23:")
|
||||
depends_on("go@1.19:", type="build", when="@1.35:")
|
||||
depends_on("go@1.21:", type="build", when="@1.37:")
|
||||
depends_on("go@1.22.3:", type="build", when="@1.41:")
|
||||
depends_on("go@1.22.4:", type="build", when="@1.42:")
|
||||
depends_on("go@1.22.5:", type="build", when="@1.44:")
|
||||
depends_on("go@1.23:", type="build", when="@1.46:")
|
||||
depends_on("go@1.23.2:", type="build", when="@1.48:")
|
||||
depends_on("go@1.23.4:", type="build", when="@1.52:")
|
||||
with default_args(type="build"):
|
||||
depends_on("go@1.24.1:", when="@1.54:")
|
||||
depends_on("go@1.23.4:", when="@1.52:")
|
||||
depends_on("go@1.23.2:", when="@1.48:")
|
||||
depends_on("go@1.23.0:", when="@1.46:")
|
||||
depends_on("go@1.22.5:", when="@1.44:")
|
||||
depends_on("go@1.22.4:", when="@1.42:")
|
||||
depends_on("go@1.22.3:", when="@1.41:")
|
||||
depends_on("go@1.21.0:", when="@1.37:")
|
||||
depends_on("go@1.19.0:", when="@1.35:")
|
||||
depends_on("go@1.18.0:", when="@1.23:")
|
||||
depends_on("go@1.17.0:", when="@1.22:")
|
||||
depends_on("go@1.13.0:")
|
||||
|
||||
build_directory = "cmd/glab"
|
||||
|
||||
# Required to correctly set the version
|
||||
# https://gitlab.com/gitlab-org/cli/-/blob/v1.55.0/Makefile?ref_type=tags#L44
|
||||
@property
|
||||
def build_args(self):
|
||||
extra_ldflags = [f"-X 'main.version=v{self.version}'"]
|
||||
|
||||
args = super().build_args
|
||||
|
||||
if "-ldflags" in args:
|
||||
ldflags_index = args.index("-ldflags") + 1
|
||||
args[ldflags_index] = args[ldflags_index] + " " + " ".join(extra_ldflags)
|
||||
else:
|
||||
args.extend(["-ldflags", " ".join(extra_ldflags)])
|
||||
|
||||
return args
|
||||
|
||||
@run_after("install")
|
||||
def install_completions(self):
|
||||
glab = Executable(self.prefix.bin.glab)
|
||||
|
@@ -75,24 +75,25 @@ def configure_args(self):
|
||||
spec = self.spec
|
||||
args = ["--with-gmp", "--with-mpfr"]
|
||||
|
||||
if spec.satisfies("^intel-mkl"):
|
||||
if spec.satisfies("+fftw") or spec.satisfies("+lapack"):
|
||||
args.append("--enable-mkl")
|
||||
if spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl") or spec.satisfies(
|
||||
"^[virtuals=fftw-api] intel-oneapi-mkl"
|
||||
):
|
||||
args.append("--enable-mkl")
|
||||
else:
|
||||
if spec.satisfies("+fftw"):
|
||||
args.append("--with-fftw={0}".format(self.spec["fftw-api"].prefix))
|
||||
args.append(f"--with-fftw={self.spec['fftw-api'].prefix}")
|
||||
if spec.satisfies("+lapack"):
|
||||
args.append("--enable-lapack={0}".format(self.spec["lapack"].prefix))
|
||||
args.append(f"--enable-lapack={self.spec['lapack'].prefix}")
|
||||
# lapack is searched only as `-llapack`, so anything else
|
||||
# wouldn't be found, causing an error.
|
||||
args.append("LIBS={0}".format(self.spec["lapack"].libs.ld_flags))
|
||||
args.append(f"LIBS={self.spec['lapack'].libs.ld_flags}")
|
||||
|
||||
if "comms=none" not in spec:
|
||||
# The build system can easily get very confused about MPI support
|
||||
# and what linker to use. In many case it'd end up building the
|
||||
# code with support for MPI but without using `mpicxx` or linking to
|
||||
# `-lmpi`, wreaking havoc. Forcing `CXX` to be mpicxx should help.
|
||||
args.extend(["CC={0}".format(spec["mpi"].mpicc), "CXX={0}".format(spec["mpi"].mpicxx)])
|
||||
args.extend([f"CC={spec['mpi'].mpicc}", f"CXX={spec['mpi'].mpicxx}"])
|
||||
|
||||
args += self.enable_or_disable("timers")
|
||||
args += self.enable_or_disable("chroma")
|
||||
@@ -119,11 +120,11 @@ def configure_args(self):
|
||||
args.extend(
|
||||
[
|
||||
"--enable-simd=GEN",
|
||||
"--enable-gen-simd-width={0}".format(spec.variants["gen-simd-width"].value),
|
||||
f"--enable-gen-simd-width={spec.variants['gen-simd-width'].value}",
|
||||
]
|
||||
)
|
||||
|
||||
args.append("--enable-comms={0}".format(spec.variants["comms"].value))
|
||||
args.append("--enable-rng={0}".format(spec.variants["rng"].value))
|
||||
args.append(f"--enable-comms={spec.variants['comms'].value}")
|
||||
args.append(f"--enable-rng={spec.variants['rng'].value}")
|
||||
|
||||
return args
|
||||
|
@@ -529,9 +529,8 @@ class Gromacs(CMakePackage, CudaPackage):
|
||||
)
|
||||
|
||||
# If the Intel suite is used for Lapack, it must be used for fftw and vice-versa
|
||||
for _intel_pkg in INTEL_MATH_LIBRARIES:
|
||||
requires(f"^[virtuals=fftw-api] {_intel_pkg}", when=f"^[virtuals=lapack] {_intel_pkg}")
|
||||
requires(f"^[virtuals=lapack] {_intel_pkg}", when=f"^[virtuals=fftw-api] {_intel_pkg}")
|
||||
requires("^[virtuals=fftw-api] intel-oneapi-mkl", when="^[virtuals=lapack] intel-oneapi-mkl")
|
||||
requires("^[virtuals=lapack] intel-oneapi-mkl", when="^[virtuals=fftw-api] intel-oneapi-mkl")
|
||||
|
||||
patch("gmxDetectCpu-cmake-3.14.patch", when="@2018:2019.3^cmake@3.14.0:")
|
||||
patch("gmxDetectSimd-cmake-3.14.patch", when="@5.0:2017^cmake@3.14.0:")
|
||||
@@ -911,9 +910,8 @@ def cmake_args(self):
|
||||
)
|
||||
options.append(f"-DNVSHMEM_ROOT={nvshmem_root}")
|
||||
|
||||
if self.spec["lapack"].name in INTEL_MATH_LIBRARIES:
|
||||
# fftw-api@3 is provided by intel-mkl or intel-parallel-studio
|
||||
# we use the mkl interface of gromacs
|
||||
if self.spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl"):
|
||||
# fftw-api@3 is provided by intel-oneapi-mkl
|
||||
options.append("-DGMX_FFT_LIBRARY=mkl")
|
||||
if self.spec.satisfies("@:2022"):
|
||||
options.append(
|
||||
|
@@ -37,14 +37,11 @@ class Gsibec(CMakePackage):
|
||||
|
||||
depends_on("lapack", type=("build", "run"))
|
||||
|
||||
depends_on("ecbuild", type=("build"))
|
||||
depends_on("jedi-cmake", type=("build"))
|
||||
depends_on("sp", type=("build"))
|
||||
depends_on("ecbuild", type="build")
|
||||
depends_on("jedi-cmake", type="build")
|
||||
depends_on("sp", type="build")
|
||||
|
||||
def cmake_args(self):
|
||||
args = []
|
||||
|
||||
mkl_providers = ["intel-mkl", "intel-oneapi-mkl", "intel-parallel-studio"]
|
||||
args.append(self.define("ENABLE_MKL", self.spec["lapack"].name in mkl_providers))
|
||||
|
||||
return args
|
||||
return [
|
||||
self.define("ENABLE_MKL", self.spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl"))
|
||||
]
|
||||
|
@@ -12,6 +12,7 @@ class Gslib(Package):
|
||||
git = "https://github.com/gslib/gslib.git"
|
||||
|
||||
version("develop", branch="master")
|
||||
version("1.0.9", tag="v1.0.9", commit="95acf5b42301d6cb48fda88d662f1d784b863089")
|
||||
version("1.0.7", tag="v1.0.7", commit="88f90cb96953527e3e833f8dbf2719273fc8346d")
|
||||
version("1.0.6", tag="v1.0.6", commit="1c2f74420fec36d5abe1d75f194a457c61f0df53")
|
||||
version("1.0.5", tag="v1.0.5", commit="1de2fba1d94e27e20f3bc3af6a3a35901e223ecd")
|
||||
|
@@ -14,12 +14,14 @@ class GtkDoc(AutotoolsPackage):
|
||||
pdf/man-pages with some extra work."""
|
||||
|
||||
homepage = "https://wiki.gnome.org/DocumentationProject/GtkDoc"
|
||||
url = "https://gitlab.gnome.org/GNOME/gtk-doc/-/archive/1.33.2/gtk-doc-1.33.2.tar.gz"
|
||||
url = "https://download.gnome.org/sources/gtk-doc/1.33/gtk-doc-1.33.2.tar.xz"
|
||||
list_url = "https://download.gnome.org/sources/gtk-doc/"
|
||||
list_depth = 1
|
||||
|
||||
license("GPL-2.0-or-later AND GFDL-1.1-or-later")
|
||||
|
||||
version("1.33.2", sha256="2d1b0cbd26edfcb54694b2339106a02a81d630a7dedc357461aeb186874cc7c0")
|
||||
version("1.32", sha256="0890c1f00d4817279be51602e67c4805daf264092adc58f9c04338566e8225ba")
|
||||
version("1.33.2", sha256="cc1b709a20eb030a278a1f9842a362e00402b7f834ae1df4c1998a723152bf43")
|
||||
version("1.32", sha256="de0ef034fb17cb21ab0c635ec730d19746bce52984a6706e7bbec6fb5e0b907c")
|
||||
|
||||
depends_on("c", type="build") # generated
|
||||
|
||||
@@ -60,14 +62,8 @@ def installcheck(self):
|
||||
pass
|
||||
|
||||
def url_for_version(self, version):
|
||||
"""Handle gnome's version-based custom URLs."""
|
||||
|
||||
if version <= Version("1.32"):
|
||||
url = "https://gitlab.gnome.org/GNOME/gtk-doc/-/archive/GTK_DOC_{0}/gtk-doc-GTK_DOC_{0}.tar.gz"
|
||||
return url.format(version.underscored)
|
||||
|
||||
url = "https://gitlab.gnome.org/GNOME/gtk-doc/-/archive/{0}/gtk-doc-{0}.tar.gz"
|
||||
return url.format(version)
|
||||
url = "https://download.gnome.org/sources/gtk-doc/{0}/gtk-doc-{1}.tar.xz"
|
||||
return url.format(version.up_to(2), version)
|
||||
|
||||
def configure_args(self):
|
||||
args = ["--with-xml-catalog={0}".format(self["docbook-xml"].catalog)]
|
||||
|
@@ -45,11 +45,16 @@ class Hdf5(CMakePackage):
|
||||
|
||||
# Odd versions are considered experimental releases
|
||||
# Even versions are maintenance versions
|
||||
version(
|
||||
"1.14.6",
|
||||
sha256="e4defbac30f50d64e1556374aa49e574417c9e72c6b1de7a4ff88c4b1bea6e9b",
|
||||
url="https://support.hdfgroup.org/releases/hdf5/v1_14/v1_14_6/downloads/hdf5-1.14.6.tar.gz",
|
||||
preferred=True,
|
||||
)
|
||||
version(
|
||||
"1.14.5",
|
||||
sha256="ec2e13c52e60f9a01491bb3158cb3778c985697131fc6a342262d32a26e58e44",
|
||||
url="https://support.hdfgroup.org/releases/hdf5/v1_14/v1_14_5/downloads/hdf5-1.14.5.tar.gz",
|
||||
preferred=True,
|
||||
)
|
||||
version(
|
||||
"1.14.4-3",
|
||||
|
@@ -120,9 +120,8 @@ def edit(self, spec, prefix):
|
||||
lin_alg_libs.append(join_path(spec["fftw-api"].prefix.lib, "libsfftw_mpi.so"))
|
||||
lin_alg_libs.append(join_path(spec["fftw-api"].prefix.lib, "libsfftw.so"))
|
||||
|
||||
elif (
|
||||
self.spec.variants["fft"].value == "mkl"
|
||||
and spec["fftw-api"].name in INTEL_MATH_LIBRARIES
|
||||
elif self.spec.variants["fft"].value == "mkl" and spec.satisfies(
|
||||
"^[virtuals=fftw-api] intel-oneapi-mkl"
|
||||
):
|
||||
mklroot = env["MKLROOT"]
|
||||
self.config["@LAINC@"] += f" -I{join_path(mklroot, 'include/fftw')}"
|
||||
@@ -159,8 +158,6 @@ def edit(self, spec, prefix):
|
||||
|
||||
# Compiler flags for CPU architecture optimizations
|
||||
if spec.satisfies("%intel"):
|
||||
# with intel-parallel-studio+mpi the '-march' arguments
|
||||
# are not passed to icc
|
||||
arch_opt = optimization_flags(self.compiler, spec.target)
|
||||
self.config["@CCFLAGS@"] = f"-O3 -restrict -ansi-alias -ip {arch_opt}"
|
||||
self.config["@CCNOOPT@"] = "-restrict"
|
||||
|
@@ -126,11 +126,7 @@ def configure_args(self):
|
||||
if self.spec.satisfies("+openmp"):
|
||||
cflags.append(self.compiler.openmp_flag)
|
||||
|
||||
if (
|
||||
self.spec.satisfies("^intel-mkl")
|
||||
or self.spec.satisfies("^intel-oneapi-mkl")
|
||||
or self.spec.satisfies("^intel-parallel-studio+mkl")
|
||||
):
|
||||
if self.spec.satisfies("^intel-oneapi-mkl"):
|
||||
ldflags.append(self.spec["blas"].libs.ld_flags)
|
||||
|
||||
if self.spec.satisfies("%aocc"):
|
||||
|
@@ -1,187 +0,0 @@
|
||||
# Copyright Spack Project Developers. See COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
import sys
|
||||
|
||||
from spack.package import *
|
||||
|
||||
|
||||
@IntelOneApiPackage.update_description
|
||||
class IntelMkl(IntelPackage):
|
||||
"""Intel Math Kernel Library. This package has been replaced by
|
||||
intel-oneapi-mkl.
|
||||
|
||||
"""
|
||||
|
||||
maintainers("rscohn2")
|
||||
|
||||
homepage = "https://software.intel.com/en-us/intel-mkl"
|
||||
|
||||
version(
|
||||
"2020.4.304",
|
||||
sha256="2314d46536974dbd08f2a4e4f9e9a155dc7e79e2798c74e7ddfaad00a5917ea5",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16917/l_mkl_2020.4.304.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2020.3.279",
|
||||
sha256="2b8e434ecc9462491130ba25a053927fd1a2eca05e12acb5936b08c486857a04",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16903/l_mkl_2020.3.279.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2020.2.254",
|
||||
sha256="ed00a267af362a6c14212bd259ab1673d64337e077263033edeef8ac72c10223",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16849/l_mkl_2020.2.254.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2020.1.217",
|
||||
sha256="082a4be30bf4f6998e4d6e3da815a77560a5e66a68e254d161ab96f07086066d",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16533/l_mkl_2020.1.217.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2020.0.166",
|
||||
sha256="f6d92deb3ff10b11ba3df26b2c62bb4f0f7ae43e21905a91d553e58f0f5a8ae0",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16232/l_mkl_2020.0.166.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.5.281",
|
||||
sha256="9995ea4469b05360d509c9705e9309dc983c0a10edc2ae3a5384bc837326737e",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15816/l_mkl_2019.5.281.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.4.243",
|
||||
sha256="fcac7b0369665d93f0c4dd98afe2816aeba5410e2b760655fe55fc477f8f33d0",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15540/l_mkl_2019.4.243.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.3.199",
|
||||
sha256="06de2b54f4812e7c39a118536259c942029fe1d6d8918ad9df558a83c4162b8f",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15275/l_mkl_2019.3.199.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.2.187",
|
||||
sha256="2bf004e6b5adb4f956993d6c20ea6ce289bb630314dd501db7f2dd5b9978ed1d",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15095/l_mkl_2019.2.187.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.1.144",
|
||||
sha256="5205a460a9c685f7a442868367389b2d0c25e1455346bc6a37c5b8ff90a20fbb",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/14895/l_mkl_2019.1.144.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.0.117",
|
||||
sha256="4e1fe2c705cfc47050064c0d6c4dee1a8c6740ac1c4f64dde9c7511c4989c7ad",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13575/l_mkl_2019.0.117.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.4.274",
|
||||
sha256="18eb3cde3e6a61a88f25afff25df762a560013f650aaf363f7d3d516a0d04881",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13725/l_mkl_2018.4.274.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.3.222",
|
||||
sha256="108d59c0927e58ce8c314db6c2b48ee331c3798f7102725f425d6884eb6ed241",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13005/l_mkl_2018.3.222.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.2.199",
|
||||
sha256="e28d12173bef9e615b0ded2f95f59a42b3e9ad0afa713a79f8801da2bfb31936",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12725/l_mkl_2018.2.199.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.1.163",
|
||||
sha256="f6dc263fc6f3c350979740a13de1b1e8745d9ba0d0f067ece503483b9189c2ca",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12414/l_mkl_2018.1.163.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.0.128",
|
||||
sha256="c368baa40ca88057292512534d7fad59fa24aef06da038ea0248e7cd1e280cec",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12070/l_mkl_2018.0.128.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2017.4.239",
|
||||
sha256="dcac591ed1e95bd72357fd778edba215a7eab9c6993236373231cc16c200c92a",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12147/l_mkl_2017.4.239.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2017.3.196",
|
||||
sha256="fd7295870fa164d6138c9818304f25f2bb263c814a6c6539c9fe4e104055f1ca",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11544/l_mkl_2017.3.196.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2017.2.174",
|
||||
sha256="0b8a3fd6bc254c3c3d9d51acf047468c7f32bf0baff22aa1e064d16d9fea389f",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11306/l_mkl_2017.2.174.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2017.1.132",
|
||||
sha256="8c6bbeac99326d59ef3afdc2a95308c317067efdaae50240d2f4a61f37622e69",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11024/l_mkl_2017.1.132.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2017.0.098",
|
||||
sha256="f2233e8e011f461d9c15a853edf7ed0ae8849aa665a1ec765c1ff196fd70c4d9",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9662/l_mkl_2017.0.098.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
# built from parallel_studio_xe_2016.3.x
|
||||
version(
|
||||
"11.3.3.210",
|
||||
sha256="ff858f0951fd698e9fb30147ea25a8a810c57f0126c8457b3b0cdf625ea43372",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9068/l_mkl_11.3.3.210.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
# built from parallel_studio_xe_2016.2.062
|
||||
version(
|
||||
"11.3.2.181",
|
||||
sha256="bac04a07a1fe2ae4996a67d1439ee90c54f31305e8663d1ccfce043bed84fc27",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8711/l_mkl_11.3.2.181.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
|
||||
depends_on("cpio", type="build")
|
||||
|
||||
conflicts("target=ppc64:", msg="intel-mkl is only available for x86_64")
|
||||
conflicts("target=ppc64le:", msg="intel-mkl is only available for x86_64")
|
||||
conflicts("target=aarch64:", msg="intel-mkl is only available for x86_64")
|
||||
|
||||
variant("shared", default=True, description="Builds shared library")
|
||||
variant("ilp64", default=False, description="64 bit integers")
|
||||
variant(
|
||||
"threads",
|
||||
default="none",
|
||||
description="Multithreading support",
|
||||
values=("openmp", "tbb", "none"),
|
||||
multi=False,
|
||||
)
|
||||
|
||||
provides("blas", "lapack")
|
||||
provides("lapack@3.9.0", when="@2020.4")
|
||||
provides("lapack@3.7.0", when="@11.3")
|
||||
provides("scalapack")
|
||||
provides("mkl")
|
||||
provides("fftw-api@3", when="@2017:")
|
||||
|
||||
if sys.platform == "darwin":
|
||||
# there is no libmkl_gnu_thread on macOS
|
||||
conflicts("threads=openmp", when="%gcc")
|
@@ -36,15 +36,11 @@ class IntelMpiBenchmarks(MakefilePackage):
|
||||
version("2019.2", sha256="0bc2224a913073aaa5958f6ae08341e5fcd39cedc6722a09bfd4a3d7591a340b")
|
||||
version("2019.1", sha256="fe0d065b9936b6943ea83cb3d00aede43b17565285c6b1791fee8e340853ef79")
|
||||
version("2019.0", sha256="1c7d44aa7fd86ca84ac7cae1a69a8426243048d6294582337f1de7b4ffe68d37")
|
||||
version("2018.1", sha256="718a4eb155f18cf15a736f6496332407b5837cf1f19831723d4cfe5266c43507")
|
||||
version("2018.0", sha256="2e60a9894a686a95791be2227bc569bf81ca3875421b5307df7d83f885b1de88")
|
||||
|
||||
depends_on("c", type="build") # generated
|
||||
depends_on("cxx", type="build") # generated
|
||||
|
||||
depends_on("mpi", when="@2019:")
|
||||
depends_on("intel-mpi", when="@2018")
|
||||
depends_on("gmake", type="build", when="@2018")
|
||||
|
||||
conflicts(
|
||||
"^openmpi",
|
||||
|
@@ -1,174 +0,0 @@
|
||||
# Copyright Spack Project Developers. See COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
from spack.package import *
|
||||
|
||||
|
||||
@IntelOneApiPackage.update_description
|
||||
class IntelMpi(IntelPackage):
|
||||
"""Intel MPI. This package has been deprecated. Use intel-oneapi-mpi instead."""
|
||||
|
||||
maintainers("rscohn2")
|
||||
|
||||
homepage = "https://software.intel.com/en-us/intel-mpi-library"
|
||||
|
||||
version(
|
||||
"2019.10.317",
|
||||
sha256="28e1b615e63d2170a99feedc75e3b0c5a7e1a07dcdaf0a4181831b07817a5346",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/17534/l_mpi_2019.10.317.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.9.304",
|
||||
sha256="618a5dc2de54306645e6428c5eb7d267b54b11b5a83dfbcad7d0f9e0d90bb2e7",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/17263/l_mpi_2019.9.304.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.8.254",
|
||||
sha256="fa163b4b79bd1b7509980c3e7ad81b354fc281a92f9cf2469bf4d323899567c0",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16814/l_mpi_2019.8.254.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.7.217",
|
||||
sha256="90383b0023f84ac003a55d8bb29dbcf0c639f43a25a2d8d8698a16e770ac9c07",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16546/l_mpi_2019.7.217.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.6.166",
|
||||
sha256="119be69f1117c93a9e5e9b8b4643918e55d2a55a78ad9567f77d16cdaf18cd6e",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16120/l_mpi_2019.6.166.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.5.281",
|
||||
sha256="9c59da051f1325b221e5bc4d8b689152e85d019f143069fa39e17989306811f4",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15838/l_mpi_2019.5.281.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.4.243",
|
||||
sha256="233a8660b92ecffd89fedd09f408da6ee140f97338c293146c9c080a154c5fcd",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15553/l_mpi_2019.4.243.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.3.199",
|
||||
sha256="5304346c863f64de797250eeb14f51c5cfc8212ff20813b124f20e7666286990",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15260/l_mpi_2019.3.199.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.2.187",
|
||||
sha256="6a3305933b5ef9e3f7de969e394c91620f3fa4bb815a4f439577739d04778b20",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15040/l_mpi_2019.2.187.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.1.144",
|
||||
sha256="dac86a5db6b86503313742b17535856a432955604f7103cb4549a9bfc256c3cd",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/14879/l_mpi_2019.1.144.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2019.0.117",
|
||||
sha256="dfb403f49c1af61b337aa952b71289c7548c3a79c32c57865eab0ea0f0e1bc08",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13584/l_mpi_2019.0.117.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.4.274",
|
||||
sha256="a1114b3eb4149c2f108964b83cad02150d619e50032059d119ac4ffc9d5dd8e0",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13741/l_mpi_2018.4.274.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.3.222",
|
||||
sha256="5021d14b344fc794e89f146e4d53d70184d7048610895d7a6a1e8ac0cf258999",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13112/l_mpi_2018.3.222.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.2.199",
|
||||
sha256="0927f1bff90d10974433ba2892e3fd38e6fee5232ab056a9f9decf565e814460",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12748/l_mpi_2018.2.199.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.1.163",
|
||||
sha256="130b11571c3f71af00a722fa8641db5a1552ac343d770a8304216d8f5d00e75c",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12414/l_mpi_2018.1.163.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2018.0.128",
|
||||
sha256="debaf2cf80df06db9633dfab6aa82213b84a665a55ee2b0178403906b5090209",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12120/l_mpi_2018.0.128.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2017.4.239",
|
||||
sha256="5a1048d284dce8bc75b45789471c83c94b3c59f8f159cab43d783fc44302510b",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12209/l_mpi_2017.4.239.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2017.3.196",
|
||||
sha256="dad9efbc5bbd3fd27cce7e1e2507ad77f342d5ecc929747ae141c890e7fb87f0",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11595/l_mpi_2017.3.196.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2017.2.174",
|
||||
sha256="106a4b362c13ddc6978715e50f5f81c58c1a4c70cd2d20a99e94947b7e733b88",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11334/l_mpi_2017.2.174.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"2017.1.132",
|
||||
sha256="8d30a63674fe05f17b0a908a9f7d54403018bfed2de03c208380b171ab99be82",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11014/l_mpi_2017.1.132.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
# built from parallel_studio_xe_2016.3.068
|
||||
version(
|
||||
"5.1.3.223",
|
||||
sha256="544f4173b09609beba711fa3ba35567397ff3b8390e4f870a3307f819117dd9b",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9278/l_mpi_p_5.1.3.223.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
|
||||
provides("mpi")
|
||||
|
||||
variant(
|
||||
"external-libfabric", default=False, description="Enable external libfabric dependency"
|
||||
)
|
||||
depends_on("libfabric", when="+external-libfabric", type=("build", "link", "run"))
|
||||
depends_on("cpio", type="build")
|
||||
|
||||
def setup_dependent_build_environment(self, env, dependent_spec):
|
||||
# Handle in callback, conveying client's compilers in additional arg.
|
||||
# CAUTION - DUP code in:
|
||||
# ../intel-mpi/package.py
|
||||
# ../intel-parallel-studio/package.py
|
||||
dependent_module = dependent_spec.package.module
|
||||
self._setup_dependent_env_callback(
|
||||
env,
|
||||
dependent_spec,
|
||||
compilers_of_client={
|
||||
"CC": dependent_module.spack_cc,
|
||||
"CXX": dependent_module.spack_cxx,
|
||||
"F77": dependent_module.spack_f77,
|
||||
"F90": dependent_module.spack_fc,
|
||||
"FC": dependent_module.spack_fc,
|
||||
},
|
||||
)
|
||||
|
||||
def setup_run_environment(self, env):
|
||||
super().setup_run_environment(env)
|
||||
|
||||
for name, value in self.mpi_compiler_wrappers.items():
|
||||
env.set(name, value)
|
@@ -186,14 +186,13 @@ class IntelOneapiMkl(IntelOneApiLibraryPackage):
|
||||
# If a +cluster then mpi_family must be set
|
||||
with when("+cluster"):
|
||||
conflicts("mpi_family=none")
|
||||
requires("mpi_family=mpich", when="^intel-oneapi-mpi")
|
||||
requires("mpi_family=mpich", when="^intel-mpi")
|
||||
requires("mpi_family=mpich", when="^mpich")
|
||||
requires("mpi_family=mpich", when="^mvapich")
|
||||
requires("mpi_family=mpich", when="^mvapich2")
|
||||
requires("mpi_family=mpich", when="^cray-mpich")
|
||||
requires("mpi_family=openmpi", when="^openmpi")
|
||||
requires("mpi_family=openmpi", when="^hpcx-mpi")
|
||||
requires("mpi_family=mpich", when="^[virtuals=mpi] intel-oneapi-mpi")
|
||||
requires("mpi_family=mpich", when="^[virtuals=mpi] mpich")
|
||||
requires("mpi_family=mpich", when="^[virtuals=mpi] mvapich")
|
||||
requires("mpi_family=mpich", when="^[virtuals=mpi] mvapich2")
|
||||
requires("mpi_family=mpich", when="^[virtuals=mpi] cray-mpich")
|
||||
requires("mpi_family=openmpi", when="^[virtuals=mpi] openmpi")
|
||||
requires("mpi_family=openmpi", when="^[virtuals=mpi] hpcx-mpi")
|
||||
|
||||
provides("fftw-api@3")
|
||||
provides("scalapack", when="+cluster")
|
||||
|
@@ -1,690 +0,0 @@
|
||||
# Copyright Spack Project Developers. See COPYRIGHT file for details.
|
||||
#
|
||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||
|
||||
from spack.package import *
|
||||
|
||||
|
||||
@IntelOneApiPackage.update_description
|
||||
class IntelParallelStudio(IntelPackage):
|
||||
"""This is an earlier version of Intel parallel software development
|
||||
tools and has now been replaced by the Intel oneAPI Toolkits.
|
||||
|
||||
"""
|
||||
|
||||
homepage = "https://software.intel.com/en-us/intel-parallel-studio-xe"
|
||||
|
||||
maintainers("rscohn2")
|
||||
|
||||
depends_on("patchelf", type="build")
|
||||
|
||||
# As of 2016, the product comes in three "editions" that vary by scope.
|
||||
#
|
||||
# In Spack, select the edition via the version number in the spec, e.g.:
|
||||
# intel-parallel-studio@cluster.2018
|
||||
|
||||
# NB: When updating the version numbers here, please also update them
|
||||
# in the 'intel' package.
|
||||
|
||||
# Cluster Edition (top tier; all components included)
|
||||
version(
|
||||
"cluster.2020.4",
|
||||
sha256="f36e49da97b6ce24d2d464d73d7ff49d71cff20e1698c20e607919819602a9f5",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/17113/parallel_studio_xe_2020_update4_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2020.2",
|
||||
sha256="4795c44374e8988b91da20ac8f13022d7d773461def4a26ca210a8694f69f133",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16744/parallel_studio_xe_2020_update2_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2020.1",
|
||||
sha256="fd11d8de72b2bd60474f8bce7b463e4cbb2255969b9eaf24f689575aa2a2abab",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16526/parallel_studio_xe_2020_update1_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2020.0",
|
||||
sha256="573b1d20707d68ce85b70934cfad15b5ad9cc14124a261c17ddd7717ba842c64",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16225/parallel_studio_xe_2020_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"cluster.2019.5",
|
||||
sha256="c03421de616bd4e640ed25ce4103ec9c5c85768a940a5cb5bd1e97b45be33904",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15809/parallel_studio_xe_2019_update5_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2019.4",
|
||||
sha256="32aee12de3b5ca14caf7578313c06b205795c67620f4a9606ea45696ee3b3d9e",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15533/parallel_studio_xe_2019_update4_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2019.3",
|
||||
sha256="b5b022366d6d1a98dbb63b60221c62bc951c9819653ad6f5142192e89f78cf63",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15268/parallel_studio_xe_2019_update3_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2019.2",
|
||||
sha256="8c526bdd95d1da454e5cada00f7a2353089b86d0c9df2088ca7f842fe3ff4cae",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15088/parallel_studio_xe_2019_update2_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2019.1",
|
||||
sha256="3a1eb39f15615f7a2688426b9835e5e841e0c030f21dcfc899fe23e09bd2c645",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/14850/parallel_studio_xe_2019_update1_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2019.0",
|
||||
sha256="1096dd4139bdd4b3abbda69a17d1e229a606759f793f5b0ba0d39623928ee4a1",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13589/parallel_studio_xe_2019_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"cluster.2018.4",
|
||||
sha256="210a5904a860e11b861720e68416f91fd47a459e4500976853291fa8b0478566",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13717/parallel_studio_xe_2018_update4_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2018.3",
|
||||
sha256="23c64b88cea5056eaeef7b4ae0f4c6a86485c97f5e41d6c8419cb00aa4929287",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12998/parallel_studio_xe_2018_update3_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2018.2",
|
||||
sha256="550bc4758f7dd70e75830d329947532ad8b7cbb85225b8ec6db7e78a3f1d6d84",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12717/parallel_studio_xe_2018_update2_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2018.1",
|
||||
sha256="f7a94e83248d2641eb7ae2c1abf681067203a5b4372619e039861b468744774c",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12374/parallel_studio_xe_2018_update1_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2018.0",
|
||||
sha256="526e5e71c420dc9b557b0bae2a81abb33eedb9b6a28ac94996ccbcf71cf53774",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12058/parallel_studio_xe_2018_cluster_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"cluster.2017.7",
|
||||
sha256="133c3aa99841a4fe48149938a90f971467452a82f033be10cd9464ba810f6360",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12856/parallel_studio_xe_2017_update7.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2017.6",
|
||||
sha256="d771b00d3658934c424f294170125dc58ae9b03639aa898a2f115d7a7482dd3a",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12534/parallel_studio_xe_2017_update6.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2017.5",
|
||||
sha256="36e496d1d1d7d7168cc3ba8f5bca9b52022339f30b62a87ed064b77a5cbccc09",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12138/parallel_studio_xe_2017_update5.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2017.4",
|
||||
sha256="27d34625adfc635d767c136b5417a372f322fabe6701b651d858a8fe06d07f2d",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11537/parallel_studio_xe_2017_update4.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2017.3",
|
||||
sha256="856950c0493de3e8b4150e18f8821675c1cf75c2eea5ff0804f59eb301414bbe",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11460/parallel_studio_xe_2017_update3.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2017.2",
|
||||
sha256="83a655f0c2969409758488d70d6719fb5ea81a84b6da3feb641ce67bb240bc8a",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11298/parallel_studio_xe_2017_update2.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2017.1",
|
||||
sha256="c808be744c98f7471c61258144859e8e8fc92771934281a16135803e941fd9b0",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/10973/parallel_studio_xe_2017_update1.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2017.0",
|
||||
sha256="f380a56a25cf17941eb691a640035e79f92516346500e0df80fbdd46c5c1b301",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9651/parallel_studio_xe_2017.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"cluster.2016.4",
|
||||
sha256="ea43c150ed6f9967bc781fe4253169a0447c69bac4fe2c563016a1ad2875ae23",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9781/parallel_studio_xe_2016_update4.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2016.3",
|
||||
sha256="aa7c6f1a6603fae07c2b01409c12de0811aa5947eaa71dfb1fe9898076c2773e",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9061/parallel_studio_xe_2016_update3.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2016.2",
|
||||
sha256="280bf39c75d7f52f206759ca4d8b6334ab92d5970957b90f5aa286bb0aa8d65e",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8676/parallel_studio_xe_2016_update2.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2016.1",
|
||||
sha256="f5a3ab9fb581e19bf1bd966f7d40a11905e002a2bfae1c4a2140544288ca3e48",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8365/parallel_studio_xe_2016_update1.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2016.0",
|
||||
sha256="fd4c32352fd78fc919601bedac5658ad5ac48efbc5700d9a8d42ed7d53bd8bb7",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/7997/parallel_studio_xe_2016.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"cluster.2015.6",
|
||||
sha256="e604ed2bb45d227b151dd2898f3edd93526d58d1db1cb9d6b6f614907864f392",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8469/parallel_studio_xe_2015_update6.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"cluster.2015.1",
|
||||
sha256="84fdf48d1de20e1d580ba5d419a5bc1c55d217a4f5dc1807190ecffe0229a62b",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/4992/parallel_studio_xe_2015_update1.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
|
||||
# Professional Edition (middle tier; excluded: MPI/TAC/Cluster Checker)
|
||||
#
|
||||
# NB: Pre-2018 download packages for Professional are the same as for
|
||||
# Cluster; differences manifest only in the tokens present in the license
|
||||
# file delivered as part of the purchase.
|
||||
version(
|
||||
"professional.2020.4",
|
||||
sha256="f9679a40c63575191385837f4f1bdafbcfd3736f09ac51d0761248b9ca9cc9e6",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/17114/parallel_studio_xe_2020_update4_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2020.2",
|
||||
sha256="96f9bca551a43e09d9648e8cba357739a759423adb671d1aa5973b7a930370c5",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16756/parallel_studio_xe_2020_update2_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2020.1",
|
||||
sha256="5b547be92ecf50cb338b3038a565f5609135b27aa98a8b7964879eb2331eb29a",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16527/parallel_studio_xe_2020_update1_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2020.0",
|
||||
sha256="e88cad18d28da50ed9cb87b12adccf13efd91bf94731dc33290481306c6f15ac",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16226/parallel_studio_xe_2020_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"professional.2019.5",
|
||||
sha256="0ec638330214539361f8632e20759f385a5a78013dcc980ee93743d86d354452",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15810/parallel_studio_xe_2019_update5_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2019.4",
|
||||
sha256="9b2818ea5739ade100841e99ce79ef7f4049a2513beb2ce20fc94706f1ba0231",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15534/parallel_studio_xe_2019_update4_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2019.3",
|
||||
sha256="92a8879106d0bdf1ecf4670cd97fbcdc67d78b13bdf484f2c516a533aa7a27f9",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15269/parallel_studio_xe_2019_update3_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2019.2",
|
||||
sha256="cdb629d74612d135ca197f1f64e6a081e31df68cda92346a29e1223bb06e64ea",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15089/parallel_studio_xe_2019_update2_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2019.1",
|
||||
sha256="bc83ef5a728903359ae11a2b90ad7dae4ae61194afb28bb5bb419f6a6aea225d",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/14825/parallel_studio_xe_2019_update1_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2019.0",
|
||||
sha256="94b9714e353e5c4f58d38cb236e2f8911cbef31c4b42a148d60c988e926411e2",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13578/parallel_studio_xe_2019_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"professional.2018.4",
|
||||
sha256="54ab4320da849108602096fa7a34aa21751068467e0d1584aa8f16352b77d323",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13718/parallel_studio_xe_2018_update4_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2018.3",
|
||||
sha256="3d8e72ccad31f243e43b72a925ad4a6908e2955682433898640ab783decf9960",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12999/parallel_studio_xe_2018_update3_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2018.2",
|
||||
sha256="fc577b29fb2c687441d4faea14a6fb6da529fc78fcb778cbface59f40e128e02",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12718/parallel_studio_xe_2018_update2_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2018.1",
|
||||
sha256="dd3e118069d87eebb614336732323b48172c8c8a653cde673a8ef02f7358e94d",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12375/parallel_studio_xe_2018_update1_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2018.0",
|
||||
sha256="72308ffa088391ea65726a79d7a73738206fbb1d8ed8563e3d06eab3120fb1a0",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12062/parallel_studio_xe_2018_professional_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"professional.2017.7",
|
||||
sha256="133c3aa99841a4fe48149938a90f971467452a82f033be10cd9464ba810f6360",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12856/parallel_studio_xe_2017_update7.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2017.6",
|
||||
sha256="d771b00d3658934c424f294170125dc58ae9b03639aa898a2f115d7a7482dd3a",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12534/parallel_studio_xe_2017_update6.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2017.5",
|
||||
sha256="36e496d1d1d7d7168cc3ba8f5bca9b52022339f30b62a87ed064b77a5cbccc09",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12138/parallel_studio_xe_2017_update5.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2017.4",
|
||||
sha256="27d34625adfc635d767c136b5417a372f322fabe6701b651d858a8fe06d07f2d",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11537/parallel_studio_xe_2017_update4.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2017.3",
|
||||
sha256="856950c0493de3e8b4150e18f8821675c1cf75c2eea5ff0804f59eb301414bbe",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11460/parallel_studio_xe_2017_update3.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2017.2",
|
||||
sha256="83a655f0c2969409758488d70d6719fb5ea81a84b6da3feb641ce67bb240bc8a",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11298/parallel_studio_xe_2017_update2.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2017.1",
|
||||
sha256="c808be744c98f7471c61258144859e8e8fc92771934281a16135803e941fd9b0",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/10973/parallel_studio_xe_2017_update1.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2017.0",
|
||||
sha256="f380a56a25cf17941eb691a640035e79f92516346500e0df80fbdd46c5c1b301",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9651/parallel_studio_xe_2017.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"professional.2016.4",
|
||||
sha256="ea43c150ed6f9967bc781fe4253169a0447c69bac4fe2c563016a1ad2875ae23",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9781/parallel_studio_xe_2016_update4.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2016.3",
|
||||
sha256="aa7c6f1a6603fae07c2b01409c12de0811aa5947eaa71dfb1fe9898076c2773e",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9061/parallel_studio_xe_2016_update3.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2016.2",
|
||||
sha256="280bf39c75d7f52f206759ca4d8b6334ab92d5970957b90f5aa286bb0aa8d65e",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8676/parallel_studio_xe_2016_update2.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2016.1",
|
||||
sha256="f5a3ab9fb581e19bf1bd966f7d40a11905e002a2bfae1c4a2140544288ca3e48",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8365/parallel_studio_xe_2016_update1.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2016.0",
|
||||
sha256="fd4c32352fd78fc919601bedac5658ad5ac48efbc5700d9a8d42ed7d53bd8bb7",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/7997/parallel_studio_xe_2016.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"professional.2015.6",
|
||||
sha256="e604ed2bb45d227b151dd2898f3edd93526d58d1db1cb9d6b6f614907864f392",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8469/parallel_studio_xe_2015_update6.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"professional.2015.1",
|
||||
sha256="84fdf48d1de20e1d580ba5d419a5bc1c55d217a4f5dc1807190ecffe0229a62b",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/4992/parallel_studio_xe_2015_update1.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
|
||||
# Composer Edition (basic tier; excluded: MPI/..., Advisor/Inspector/Vtune)
|
||||
version(
|
||||
"composer.2020.4",
|
||||
sha256="ac1efeff608a8c3a416e6dfe20364061e8abf62d35fbaacdffe3fc9676fc1aa3",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16759/parallel_studio_xe_2020_update2_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2020.2",
|
||||
sha256="42af16e9a91226978bb401d9f17b628bc279aa8cb104d4a38ba0808234a79bdd",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16759/parallel_studio_xe_2020_update2_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2020.1",
|
||||
sha256="26c7e7da87b8a83adfd408b2a354d872be97736abed837364c1bf10f4469b01e",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16530/parallel_studio_xe_2020_update1_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2020.0",
|
||||
sha256="9168045466139b8e280f50f0606b9930ffc720bbc60bc76f5576829ac15757ae",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16229/parallel_studio_xe_2020_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"composer.2019.5",
|
||||
sha256="e8c8e4b9b46826a02c49325c370c79f896858611bf33ddb7fb204614838ad56c",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15813/parallel_studio_xe_2019_update5_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2019.4",
|
||||
sha256="1915993445323e1e78d6de73702a88fa3df2036109cde03d74ee38fef9f1abf2",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15537/parallel_studio_xe_2019_update4_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2019.3",
|
||||
sha256="15373ac6df2a84e6dd9fa0eac8b5f07ab00cdbb67f494161fd0d4df7a71aff8e",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15272/parallel_studio_xe_2019_update3_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2019.2",
|
||||
sha256="1e0f400be1f458592a8c2e7d55c1b2a4506f68f22bacbf1175af947809a4cd87",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15092/parallel_studio_xe_2019_update2_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2019.1",
|
||||
sha256="db000cb2ebf411f6e91719db68a0c68b8d3f7d38ad7f2049ea5b2f1b5f006c25",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/14832/parallel_studio_xe_2019_update1_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2019.0",
|
||||
sha256="e1a29463038b063e01f694e2817c0fcf1a8e824e24f15a26ce85f20afa3f963a",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13581/parallel_studio_xe_2019_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"composer.2018.4",
|
||||
sha256="94aca8f091dff9535b02f022a37aef150b36925c8ef069335621496f8e4db267",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13722/parallel_studio_xe_2018_update4_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2018.3",
|
||||
sha256="f21f7759709a3d3e3390a8325fa89ac79b1fce8890c292e73b2ba3ec576ebd2b",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13002/parallel_studio_xe_2018_update3_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2018.2",
|
||||
sha256="02d2a9fb10d9810f85dd77700215c4348d2e4475e814e4f086eb1442462667ff",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12722/parallel_studio_xe_2018_update2_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2018.1",
|
||||
sha256="db9aa417da185a03a63330c9d76ee8e88496ae6b771584d19003a29eedc7cab5",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12381/parallel_studio_xe_2018_update1_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2018.0",
|
||||
sha256="ecad64360fdaff2548a0ea250a396faf680077c5a83c3c3ce2c55f4f4270b904",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12067/parallel_studio_xe_2018_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"composer.2017.7",
|
||||
sha256="661e33b68e47bf335694d2255f5883955234e9085c8349783a5794eed2a937ad",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12860/parallel_studio_xe_2017_update7_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2017.6",
|
||||
sha256="771f50746fe130ea472394c42e25d2c7edae049ad809d2050945ef637becf65f",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12538/parallel_studio_xe_2017_update6_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2017.5",
|
||||
sha256="ede4ea9351fcf263103588ae0f130b4c2a79395529cdb698b0d6e866c4871f78",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12144/parallel_studio_xe_2017_update5_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2017.4",
|
||||
sha256="4304766f80206a27709be61641c16782fccf2b3fcf7285782cce921ddc9b10ff",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11541/parallel_studio_xe_2017_update4_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2017.3",
|
||||
sha256="3648578d7bba993ebb1da37c173979bfcfb47f26e7f4e17f257e78dea8fd96ab",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11464/parallel_studio_xe_2017_update3_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2017.2",
|
||||
sha256="abd26ab2a703e73ab93326984837818601c391782a6bce52da8b2a246798ad40",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11302/parallel_studio_xe_2017_update2_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2017.1",
|
||||
sha256="bc592abee829ba6e00a4f60961b486b80c15987ff1579d6560186407c84add6f",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/10978/parallel_studio_xe_2017_update1_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2017.0",
|
||||
sha256="d218db66a5bb57569bea00821ac95d4647eda7422bf8a178d1586b0fb314935a",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9656/parallel_studio_xe_2017_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"composer.2016.4",
|
||||
sha256="17606c52cab6f5114223a2425923c8dd69f1858f5a3bdf280e0edea49ebd430d",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9785/parallel_studio_xe_2016_composer_edition_update4.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2016.3",
|
||||
sha256="fcec90ba97533e4705077e0701813b5a3bcc197b010b03e96f83191a35c26acf",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9063/parallel_studio_xe_2016_composer_edition_update3.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2016.2",
|
||||
sha256="6309ef8be1abba7737d3c1e17af64ca2620672b2da57afe2c3c643235f65b4c7",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8680/parallel_studio_xe_2016_composer_edition_update2.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
# Pre-2016, the only product was "Composer XE"; dir structure is different.
|
||||
version(
|
||||
"composer.2015.6",
|
||||
sha256="b1e09833469ca76a2834cd0a5bb5fea11ec9986da85abf4c6eed42cd96ec24cb",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8432/l_compxe_2015.6.233.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"composer.2015.1",
|
||||
sha256="8a438fe20103e27bfda132955616d0c886aa6cfdd86dcd9764af5d937a8799d9",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/4933/l_compxe_2015.1.133.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
|
||||
# Generic Variants
|
||||
variant("rpath", default=True, description="Add rpath to .cfg files")
|
||||
variant(
|
||||
"newdtags", default=False, description="Allow use of --enable-new-dtags in MPI wrappers"
|
||||
)
|
||||
variant("shared", default=True, description="Builds shared library")
|
||||
variant("ilp64", default=False, description="64 bit integers")
|
||||
variant(
|
||||
"threads",
|
||||
default="none",
|
||||
description="Multithreading support",
|
||||
values=("openmp", "none"),
|
||||
multi=False,
|
||||
)
|
||||
|
||||
auto_dispatch_options = IntelPackage.auto_dispatch_options
|
||||
variant(
|
||||
"auto_dispatch",
|
||||
values=any_combination_of(*auto_dispatch_options),
|
||||
description="Enable generation of multiple auto-dispatch code paths",
|
||||
)
|
||||
|
||||
# Components available in all editions
|
||||
variant("daal", default=True, description="Install the Intel DAAL libraries")
|
||||
variant(
|
||||
"gdb", default=False, description="Install the Intel Debugger for Heterogeneous Compute"
|
||||
)
|
||||
variant("ipp", default=True, description="Install the Intel IPP libraries")
|
||||
variant("mkl", default=True, description="Install the Intel MKL library")
|
||||
variant("mpi", default=True, description="Install the Intel MPI library")
|
||||
variant("tbb", default=True, description="Install the Intel TBB libraries")
|
||||
|
||||
# Components only available in the Professional and Cluster Editions
|
||||
variant("advisor", default=False, description="Install the Intel Advisor")
|
||||
variant("clck", default=False, description="Install the Intel Cluster Checker")
|
||||
variant("inspector", default=False, description="Install the Intel Inspector")
|
||||
variant("itac", default=False, description="Install the Intel Trace Analyzer and Collector")
|
||||
variant("vtune", default=False, description="Install the Intel VTune Amplifier XE")
|
||||
|
||||
provides("daal", when="+daal")
|
||||
provides("ipp", when="+ipp")
|
||||
|
||||
provides("mkl", when="+mkl")
|
||||
provides("blas", "lapack", when="+mkl")
|
||||
provides("scalapack", when="+mkl")
|
||||
|
||||
provides("fftw-api@3", when="+mkl@professional.2017:")
|
||||
provides("fftw-api@3", when="+mkl@cluster.2017:")
|
||||
provides("fftw-api@3", when="+mkl@composer.2017:")
|
||||
|
||||
provides("mpi", when="+mpi")
|
||||
provides("tbb", when="+tbb")
|
||||
|
||||
conflicts("target=ppc64:", msg="intel-parallel-studio is only available for x86_64")
|
||||
conflicts("target=ppc64le:", msg="intel-parallel-studio is only available for x86_64")
|
||||
conflicts("target=aarch64:", msg="intel-parallel-studio is only available for x86_64")
|
||||
|
||||
# For TBB, static linkage is not and has never been supported by Intel:
|
||||
# https://www.threadingbuildingblocks.org/faq/there-version-tbb-provides-statically-linked-libraries
|
||||
conflicts("+tbb", when="~shared")
|
||||
|
||||
conflicts("+advisor", when="@composer.0:composer")
|
||||
conflicts("+clck", when="@composer.0:composer")
|
||||
conflicts("+inspector", when="@composer.0:composer")
|
||||
conflicts("+itac", when="@composer.0:composer")
|
||||
conflicts("+mpi", when="@composer.0:composer")
|
||||
conflicts("+vtune", when="@composer.0:composer")
|
||||
|
||||
conflicts("+clck", when="@professional.0:professional")
|
||||
conflicts("+itac", when="@professional.0:professional")
|
||||
conflicts("+mpi", when="@professional.0:professional")
|
||||
|
||||
# The following components are not available before 2016
|
||||
conflicts("+daal", when="@professional.0:professional.2015.7")
|
||||
conflicts("+daal", when="@cluster.0:cluster.2015.7")
|
||||
conflicts("+daal", when="@composer.0:composer.2015.7")
|
||||
|
||||
# MacOS does not support some of the auto dispatch settings
|
||||
conflicts("auto_dispatch=SSE2", "platform=darwin", msg="SSE2 is not supported on MacOS")
|
||||
conflicts(
|
||||
"auto_dispatch=SSE3",
|
||||
"platform=darwin target=x86_64:",
|
||||
msg="SSE3 is not supported on MacOS x86_64",
|
||||
)
|
||||
|
||||
    def setup_dependent_build_environment(self, env, dependent_spec):
        # Handle in callback, conveying client's compilers in additional arg.
        # CAUTION - DUP code in:
        #   ../intel-mpi/package.py
        #   ../intel-parallel-studio/package.py
        dependent_module = dependent_spec.package.module
        self._setup_dependent_env_callback(
            env,
            dependent_spec,
            compilers_of_client={
                "CC": dependent_module.spack_cc,
                "CXX": dependent_module.spack_cxx,
                "F77": dependent_module.spack_f77,
                "F90": dependent_module.spack_fc,
                "FC": dependent_module.spack_fc,
            },
        )

    def setup_run_environment(self, env):
        super().setup_run_environment(env)

        for name, value in self.mpi_compiler_wrappers.items():
            env.set(name, value)

@@ -1,25 +0,0 @@
paths:
- layout:
  - executables:
    - "bin/intel64/icc"
    script: |
      echo "icc (ICC) 18.0.5 20180823"
      echo "Copyright (C) 1985-2018 Intel Corporation.  All rights reserved."
  - executables:
    - "bin/intel64/icpc"
    script: |
      echo "icpc (ICC) 18.0.5 20180823"
      echo "Copyright (C) 1985-2018 Intel Corporation.  All rights reserved."
  - executables:
    - "bin/intel64/ifort"
    script: |
      echo "ifort (IFORT) 18.0.5 20180823"
      echo "Copyright (C) 1985-2018 Intel Corporation.  All rights reserved."
  platforms: ["darwin", "linux"]
  results:
  - spec: 'intel@18.0.5'
    extra_attributes:
      compilers:
        c: ".*/bin/intel64/icc"
        cxx: ".*/bin/intel64/icpc"
        fortran: ".*/bin/intel64/ifort"
@@ -1,289 +0,0 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import re

import llnl.util.tty as tty

import spack.build_systems.compiler
from spack.package import *


@IntelOneApiPackage.update_description
class Intel(IntelPackage):
    """Intel Compilers. This package has been replaced by
    intel-oneapi-compilers.

    """

    homepage = "https://software.intel.com/en-us/intel-parallel-studio-xe"

    # Robert Cohn
    maintainers("rscohn2")

    depends_on("patchelf", type="build")

    # Same as in ../intel-parallel-studio/package.py, Composer Edition,
    # but the version numbering in Spack differs.
version(
|
||||
"20.0.4",
|
||||
sha256="ac1efeff608a8c3a416e6dfe20364061e8abf62d35fbaacdffe3fc9676fc1aa3",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/17117/parallel_studio_xe_2020_update4_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"20.0.2",
|
||||
sha256="42af16e9a91226978bb401d9f17b628bc279aa8cb104d4a38ba0808234a79bdd",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16759/parallel_studio_xe_2020_update2_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"20.0.1",
|
||||
sha256="26c7e7da87b8a83adfd408b2a354d872be97736abed837364c1bf10f4469b01e",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16530/parallel_studio_xe_2020_update1_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"20.0.0",
|
||||
sha256="9168045466139b8e280f50f0606b9930ffc720bbc60bc76f5576829ac15757ae",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16229/parallel_studio_xe_2020_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"19.1.2",
|
||||
sha256="42af16e9a91226978bb401d9f17b628bc279aa8cb104d4a38ba0808234a79bdd",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16759/parallel_studio_xe_2020_update2_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"19.1.1",
|
||||
sha256="26c7e7da87b8a83adfd408b2a354d872be97736abed837364c1bf10f4469b01e",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16530/parallel_studio_xe_2020_update1_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"19.1.0",
|
||||
sha256="9168045466139b8e280f50f0606b9930ffc720bbc60bc76f5576829ac15757ae",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/16229/parallel_studio_xe_2020_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"19.0.5",
|
||||
sha256="e8c8e4b9b46826a02c49325c370c79f896858611bf33ddb7fb204614838ad56c",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15813/parallel_studio_xe_2019_update5_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"19.0.4",
|
||||
sha256="1915993445323e1e78d6de73702a88fa3df2036109cde03d74ee38fef9f1abf2",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15537/parallel_studio_xe_2019_update4_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"19.0.3",
|
||||
sha256="15373ac6df2a84e6dd9fa0eac8b5f07ab00cdbb67f494161fd0d4df7a71aff8e",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/15272/parallel_studio_xe_2019_update3_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"19.0.1",
|
||||
sha256="db000cb2ebf411f6e91719db68a0c68b8d3f7d38ad7f2049ea5b2f1b5f006c25",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/14832/parallel_studio_xe_2019_update1_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"19.0.0",
|
||||
sha256="e1a29463038b063e01f694e2817c0fcf1a8e824e24f15a26ce85f20afa3f963a",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13581/parallel_studio_xe_2019_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
|
||||
# Version 18.0.5 comes with parallel studio 2018 update 4. See:
|
||||
# https://software.intel.com/en-us/articles/intel-compiler-and-composer-update-version-numbers-to-compiler-version-number-mapping
|
||||
version(
|
||||
"18.0.5",
|
||||
sha256="94aca8f091dff9535b02f022a37aef150b36925c8ef069335621496f8e4db267",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13722/parallel_studio_xe_2018_update4_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"18.0.3",
|
||||
sha256="f21f7759709a3d3e3390a8325fa89ac79b1fce8890c292e73b2ba3ec576ebd2b",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/13002/parallel_studio_xe_2018_update3_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"18.0.2",
|
||||
sha256="02d2a9fb10d9810f85dd77700215c4348d2e4475e814e4f086eb1442462667ff",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12722/parallel_studio_xe_2018_update2_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"18.0.1",
|
||||
sha256="db9aa417da185a03a63330c9d76ee8e88496ae6b771584d19003a29eedc7cab5",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12381/parallel_studio_xe_2018_update1_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"18.0.0",
|
||||
sha256="ecad64360fdaff2548a0ea250a396faf680077c5a83c3c3ce2c55f4f4270b904",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12067/parallel_studio_xe_2018_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"17.0.7",
|
||||
sha256="661e33b68e47bf335694d2255f5883955234e9085c8349783a5794eed2a937ad",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12860/parallel_studio_xe_2017_update7_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"17.0.6",
|
||||
sha256="771f50746fe130ea472394c42e25d2c7edae049ad809d2050945ef637becf65f",
|
||||
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12538/parallel_studio_xe_2017_update6_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"17.0.5",
|
||||
sha256="ede4ea9351fcf263103588ae0f130b4c2a79395529cdb698b0d6e866c4871f78",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/12144/parallel_studio_xe_2017_update5_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"17.0.4",
|
||||
sha256="4304766f80206a27709be61641c16782fccf2b3fcf7285782cce921ddc9b10ff",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11541/parallel_studio_xe_2017_update4_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"17.0.3",
|
||||
sha256="3648578d7bba993ebb1da37c173979bfcfb47f26e7f4e17f257e78dea8fd96ab",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11464/parallel_studio_xe_2017_update3_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"17.0.2",
|
||||
sha256="abd26ab2a703e73ab93326984837818601c391782a6bce52da8b2a246798ad40",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/11302/parallel_studio_xe_2017_update2_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"17.0.1",
|
||||
sha256="bc592abee829ba6e00a4f60961b486b80c15987ff1579d6560186407c84add6f",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/10978/parallel_studio_xe_2017_update1_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"17.0.0",
|
||||
sha256="d218db66a5bb57569bea00821ac95d4647eda7422bf8a178d1586b0fb314935a",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9656/parallel_studio_xe_2017_composer_edition.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
version(
|
||||
"16.0.4",
|
||||
sha256="17606c52cab6f5114223a2425923c8dd69f1858f5a3bdf280e0edea49ebd430d",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9785/parallel_studio_xe_2016_composer_edition_update4.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"16.0.3",
|
||||
sha256="fcec90ba97533e4705077e0701813b5a3bcc197b010b03e96f83191a35c26acf",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/9063/parallel_studio_xe_2016_composer_edition_update3.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"16.0.2",
|
||||
sha256="6309ef8be1abba7737d3c1e17af64ca2620672b2da57afe2c3c643235f65b4c7",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8680/parallel_studio_xe_2016_composer_edition_update2.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
#
|
||||
# Grandfathered release; different directory structure.
|
||||
version(
|
||||
"15.0.6",
|
||||
sha256="b1e09833469ca76a2834cd0a5bb5fea11ec9986da85abf4c6eed42cd96ec24cb",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/8432/l_compxe_2015.6.233.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
version(
|
||||
"15.0.1",
|
||||
sha256="8a438fe20103e27bfda132955616d0c886aa6cfdd86dcd9764af5d937a8799d9",
|
||||
url="http://registrationcenter-download.intel.com/akdlm/IRC_NAS/tec/4933/l_compxe_2015.1.133.tgz",
|
||||
deprecated=True,
|
||||
)
|
||||
|
||||
    variant("rpath", default=True, description="Add rpath to .cfg files")

    auto_dispatch_options = IntelPackage.auto_dispatch_options
    variant(
        "auto_dispatch",
        values=any_combination_of(*auto_dispatch_options),
        description="Enable generation of multiple auto-dispatch code paths",
    )

    # MacOS does not support some of the auto dispatch settings
    conflicts("auto_dispatch=SSE2", "platform=darwin", msg="SSE2 is not supported on MacOS")
    conflicts(
        "auto_dispatch=SSE3",
        "platform=darwin target=x86_64:",
        msg="SSE3 is not supported on MacOS x86_64",
    )

    executables = ["^icc$", "^icpc$", "^ifort$"]

    @classmethod
    def determine_version(cls, exe):
        version_regex = re.compile(r"\((?:IFORT|ICC)\) ([^ ]+)")
        try:
            output = spack.build_systems.compiler.compiler_output(
                exe, version_argument="--version"
            )
            match = version_regex.search(output)
            if match:
                return match.group(1)
        except ProcessError:
            pass
        except Exception as e:
            tty.debug(str(e))

        return None

    @classmethod
    def determine_variants(cls, exes, version_str):
        compilers = {}
        for exe in exes:
            if "icc" in exe:
                compilers["c"] = exe
            if "icpc" in exe:
                compilers["cxx"] = exe
            if "ifort" in exe:
                compilers["fortran"] = exe
        return "", {"compilers": compilers}

    @property
    def cc(self):
        msg = "cannot retrieve C compiler [spec is not concrete]"
        assert self.spec.concrete, msg
        if self.spec.external:
            return self.spec.extra_attributes["compilers"].get("c", None)
        return str(self.spec.prefix.bin.intel64.icc)

    @property
    def cxx(self):
        msg = "cannot retrieve C++ compiler [spec is not concrete]"
        assert self.spec.concrete, msg
        if self.spec.external:
            return self.spec.extra_attributes["compilers"].get("cxx", None)
        return str(self.spec.prefix.bin.intel64.icpc)

    @property
    def fortran(self):
        msg = "cannot retrieve Fortran compiler [spec is not concrete]"
        assert self.spec.concrete, msg
        if self.spec.external:
            return self.spec.extra_attributes["compilers"].get("fortran", None)
        return str(self.spec.prefix.bin.intel64.ifort)

    # Since the current package is a subset of 'intel-parallel-studio',
    # all remaining Spack actions are handled in the package class.
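
For reference, the deleted ``executables``/``determine_version`` hooks above are what allowed ``spack external find`` to register an existing classic-compiler install. A standalone sketch (illustrative only, not part of this changeset) of how that regex maps the mocked ``--version`` output from the deleted detection test to a version string::

    import re

    # Regex copied from the deleted determine_version() above.
    version_regex = re.compile(r"\((?:IFORT|ICC)\) ([^ ]+)")

    # Sample outputs taken from the deleted detection-test YAML.
    sample_outputs = {
        "icc": "icc (ICC) 18.0.5 20180823",
        "ifort": "ifort (IFORT) 18.0.5 20180823",
    }

    for exe, output in sample_outputs.items():
        match = version_regex.search(output)
        # Both samples yield "18.0.5", matching the expected spec
        # 'intel@18.0.5' in that test.
        print(exe, match.group(1) if match else None)
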
@@ -83,7 +83,7 @@ def edit(self, spec, prefix):
            vinc += " -DHAVE_LAPACK_CONFIG_H"
            vinc += " -DLAPACK_COMPLEX_STRUCTURE"
            filter_file("#PLATFORM=lapack", vinc, mf, string=True)
        elif ltype == "intel-mkl":
        elif ltype == "intel-oneapi-mkl":
            vpla = "PLATFORM=mkl"
            filter_file("#PLATFORM=lapack", vinc, mf, string=True)

@@ -75,7 +75,7 @@ class Itk(CMakePackage):
    )

    def cmake_args(self):
        use_mkl = self.spec["fftw-api"].name in INTEL_MATH_LIBRARIES
        use_mkl = self.spec.satisfies("^[virtuals=fftw-api] intel-oneapi-mkl")
        args = [
            self.define("BUILD_TESTING", False),
            self.define("BUILD_SHARED_LIBS", True),

@@ -73,9 +73,7 @@ def install(self, spec, prefix):
            configure_args.append("--atlas-root=" + spec["blas"].prefix)
            if "+pthread" in spec["blas"].variants:
                configure_args.append("--threaded-atlas")
        elif spec.satisfies("^[virtuals=blas] intel-parallel-studio") or spec.satisfies(
            "^[virtuals=blas] intel-mkl"
        ):
        elif spec.satisfies("^[virtuals=blas] intel-oneapi-mkl"):
            configure_args.append("--mathlib=MKL")
            configure_args.append("--mkl-root=" + spec["blas"].prefix.mkl)
            if "+openmp" in spec["blas"].variants:

@@ -28,14 +28,13 @@ class Ldak(Package):
    depends_on("blas")
    depends_on("lapack")
    depends_on("openblas threads=openmp", when="^[virtuals=blas] openblas")
    depends_on("intel-mkl threads=openmp", when="^[virtuals=blas] intel-mkl")
    depends_on("intel-oneapi-mkl threads=openmp", when="^[virtuals=blas] intel-oneapi-mkl")
    depends_on("glpk", when="+glpk")

    requires("target=x86_64:", when="~glpk", msg="bundled qsopt is only for x86_64")
    requires(
        "^openblas",
        *[f"^{intel_pkg}" for intel_pkg in INTEL_MATH_LIBRARIES],
        "^[virtuals=lapack] openblas",
        "^[virtuals=lapack] intel-oneapi-mkl",
        policy="one_of",
        msg="Only mkl or openblas are supported for blas/lapack with ldak",
    )
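
The common thread in these hunks is swapping name checks such as ``spec["blas"].name in INTEL_MATH_LIBRARIES`` for virtual-provider selectors. A minimal sketch of how the two constructs typically combine in a recipe (hypothetical ``Example`` package, written against the same Spack directives used above; illustrative only)::

    from spack.package import *


    class Example(CMakePackage):
        """Hypothetical recipe showing the ^[virtuals=...] selector."""

        depends_on("blas")
        depends_on("lapack")

        # Constrain a provider only when that provider was actually chosen
        # to supply the blas virtual.
        requires("^openblas threads=openmp", when="^[virtuals=blas] openblas")

        # Accept exactly one of the supported lapack providers.
        requires(
            "^[virtuals=lapack] openblas",
            "^[virtuals=lapack] intel-oneapi-mkl",
            policy="one_of",
            msg="only OpenBLAS or oneAPI MKL are supported here",
        )

        def cmake_args(self):
            # Branch on the concrete provider instead of matching names.
            if self.spec.satisfies("^[virtuals=blas] intel-oneapi-mkl"):
                return [self.define("USE_MKL", True)]
            return []
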
@@ -31,8 +31,6 @@ class Ligra(MakefilePackage):
    def setup_build_environment(self, env):
        if self.spec.satisfies("+openmp"):
            env.set("OPENMP", "1")
        # when +mkl, MKLROOT will be defined by intel-mkl package,
        # triggering a build with mkl support

    def setup_run_environment(self, env):
        env.prepend_path("PATH", self.prefix.apps)
@@ -85,12 +85,11 @@ def edit(self, spec, prefix):
        flags["PREFIX"] = prefix

        # Set LAPACK and SCALAPACK
        if (
            spec["scalapack"].name in INTEL_MATH_LIBRARIES
            or spec["lapack"].name in INTEL_MATH_LIBRARIES
            or spec["blas"].name in INTEL_MATH_LIBRARIES
        if spec.satisfies("^[virtuals=scalapack] intel-oneapi-mkl") or spec.satisfies(
            "^[virtuals=lapack] intel-oneapi-mkl"
        ):
            flags["LAPACK"] = self._get_mkl_ld_flags(spec)
            flags["CPPFLAGS"] = flags.get("CPPFLAGS", "") + " -DHAVE_MKL "
        else:
            flags["LAPACK"] = spec["lapack"].libs.ld_flags + " " + spec["blas"].libs.ld_flags
            if "+scalapack" in spec:

@@ -116,13 +115,6 @@ def edit(self, spec, prefix):
        if "+scalapack" in spec:
            flags["CPPFLAGS"] = flags.get("CPPFLAGS", "") + " -DHAVE_SCALAPACK -DHAVE_MPI "

        if (
            spec["lapack"].name in INTEL_MATH_LIBRARIES
            or spec["scalapack"].name in INTEL_MATH_LIBRARIES
            or spec["blas"].name in INTEL_MATH_LIBRARIES
        ):
            flags["CPPFLAGS"] = flags.get("CPPFLAGS", "") + " -DHAVE_MKL "

        # Write configuration file
        with open("my_machine.arch", "w") as f:
            for k, v in flags.items():
@@ -236,7 +236,9 @@ def write_makefile_inc(self):
        # As of version 5.2.0, MUMPS is able to take advantage
        # of the GEMMT BLAS extension. MKL and amdblis are the only
        # known BLAS implementation supported.
        if self.spec["blas"].name in INTEL_MATH_LIBRARIES and self.spec.satisfies("@5.2.0:"):
        if self.spec.satisfies("^[virtuals=blas] intel-oneapi-mkl") and self.spec.satisfies(
            "@5.2.0:"
        ):
            optf.append("-DGEMMT_AVAILABLE")

        if "@5.2.0: ^amdblis@3.0:" in self.spec:
@@ -72,12 +72,7 @@ class Ngspice(AutotoolsPackage):
        depends_on("cray-fftw+openmp", when="^[virtuals=fftw-api] cray-fftw")
        depends_on("fftw+openmp", when="^[virtuals=fftw-api] fftw")
        depends_on("fujitsu-fftw+openmp", when="^[virtuals=fftw-api] fujitsu-fftw")
        depends_on("intel-mkl threads=openmp", when="^[virtuals=fftw-api] intel-mkl")
        depends_on("intel-oneapi-mkl threads=openmp", when="^[virtuals=fftw-api] intel-oneapi-mkl")
        depends_on(
            "intel-parallel-studio threads=openmp",
            when="^[virtuals=fftw-api] intel-parallel-studio",
        )

    with when("+fft~openmp"):
        depends_on("acfl threads=none", when="^[virtuals=fftw-api] acfl")

@@ -86,11 +81,7 @@ class Ngspice(AutotoolsPackage):
        depends_on("cray-fftw~openmp", when="^[virtuals=fftw-api] cray-fftw")
        depends_on("fftw~openmp", when="^[virtuals=fftw-api] fftw")
        depends_on("fujitsu-fftw~openmp", when="^[virtuals=fftw-api] fujitsu-fftw")
        depends_on("intel-mkl threads=none", when="^[virtuals=fftw-api] intel-mkl")
        depends_on("intel-oneapi-mkl threads=none", when="^[virtuals=fftw-api] intel-oneapi-mkl")
        depends_on(
            "intel-parallel-studio threads=none", when="^[virtuals=fftw-api] intel-parallel-studio"
        )

    depends_on("readline", when="+readline build=bin")
@@ -44,6 +44,13 @@ class Nlcglib(CMakePackage, CudaPackage, ROCmPackage):
    depends_on("mpi")
    depends_on("lapack")

    requires(
        "^[virtuals=lapack] openblas",
        "^[virtuals=lapack] intel-oneapi-mkl",
        policy="one_of",
        msg="Only mkl or openblas are supported for blas/lapack with ldak",
    )

    depends_on("kokkos~cuda~rocm", when="~cuda~rocm")
    depends_on("kokkos+openmp", when="+openmp")
@@ -81,13 +88,10 @@ def cmake_args(self):
            self.define_from_variant("USE_CUDA", "cuda"),
        ]

        if self.spec["blas"].name in ["intel-mkl", "intel-parallel-studio"]:
            options += [self.define("LAPACK_VENDOR", "MKL")]
        elif self.spec["blas"].name in ["intel-oneapi-mkl"]:
            options += [self.define("LAPACK_VENDOR", "MKLONEAPI")]
        if self.spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl"):
            mkl_mapper = {
                "threading": {"none": "sequential", "openmp": "gnu_thread", "tbb": "tbb_thread"},
                "mpi": {"intel-mpi": "intelmpi", "mpich": "mpich", "openmpi": "openmpi"},
                "mpi": {"intel-oneapi-mpi": "intelmpi", "mpich": "mpich", "openmpi": "openmpi"},
            }

            mkl_threads = mkl_mapper["threading"][

@@ -102,23 +106,20 @@ def cmake_args(self):

            options.extend(
                [
                    self.define("LAPACK_VENDOR", "MKLONEAPI"),
                    self.define("MKL_INTERFACE", "lp64"),
                    self.define("MKL_THREADING", mkl_threads),
                    self.define("MKL_MPI", mkl_mpi),
                ]
            )

        elif self.spec["blas"].name in ["openblas"]:
            options += [self.define("LAPACK_VENDOR", "OpenBLAS")]
        else:
            raise Exception("blas/lapack must be either openblas or mkl.")
            options.append(self.define("LAPACK_VENDOR", "OpenBLAS"))

        if "+cuda%gcc" in self.spec:
            options += [
                self.define(
                    "CMAKE_CXX_COMPILER", "{0}".format(self["kokkos-nvcc-wrapper"].kokkos_cxx)
                )
            ]
            options.append(
                self.define("CMAKE_CXX_COMPILER", self["kokkos-nvcc-wrapper"].kokkos_cxx)
            )

        if "+cuda" in self.spec:
            cuda_archs = self.spec.variants["cuda_arch"].value

@@ -126,19 +127,23 @@ def cmake_args(self):
                cuda_flags = " ".join(
                    ["-gencode arch=compute_{0},code=sm_{0}".format(x) for x in cuda_archs]
                )
                options += [self.define("CMAKE_CUDA_FLAGS", cuda_flags)]
                options.append(self.define("CMAKE_CUDA_FLAGS", cuda_flags))
            else:
                options += [self.define("CMAKE_CUDA_ARCHITECTURES", cuda_archs)]
                options.append(self.define("CMAKE_CUDA_ARCHITECTURES", cuda_archs))

            if "^cuda+allow-unsupported-compilers" in self.spec:
                options += [self.define("CMAKE_CUDA_FLAGS", "--allow-unsupported-compiler")]
                options.append(self.define("CMAKE_CUDA_FLAGS", "--allow-unsupported-compiler"))

        if "+rocm" in self.spec:
            options.append(self.define("CMAKE_CXX_COMPILER", self.spec["hip"].hipcc))
            archs = ",".join(self.spec.variants["amdgpu_target"].value)
            options.append("-DHIP_HCC_FLAGS=--amdgpu-target={0}".format(archs))
            options.append(
                "-DCMAKE_CXX_FLAGS=--amdgpu-target={0} --offload-arch={0}".format(archs)
            options.extend(
                [
                    self.define("CMAKE_CXX_COMPILER", self.spec["hip"].hipcc),
                    self.define("HIP_HCC_FLAGS", f"--amdgpu-target={archs}"),
                    self.define(
                        "CMAKE_CXX_FLAGS", f"--amdgpu-target={archs} --offload-arch={archs}"
                    ),
                ]
            )

        return options
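
The ``mkl_mapper`` table above translates Spack-level choices (the MKL ``threads`` variant and the chosen ``mpi`` provider) into the values expected by oneMKL's ``MKLConfig.cmake``. A rough sketch of how such a table is consumed (illustrative only; the helper, its fallback, and the surrounding spec lookups are assumptions, not code from this changeset)::

    mkl_mapper = {
        "threading": {"none": "sequential", "openmp": "gnu_thread", "tbb": "tbb_thread"},
        "mpi": {"intel-oneapi-mpi": "intelmpi", "mpich": "mpich", "openmpi": "openmpi"},
    }


    def mkl_cmake_settings(spec):
        """Map a concrete spec to MKLConfig-style cache variables (sketch)."""
        mkl_threads = mkl_mapper["threading"][
            spec["intel-oneapi-mkl"].variants["threads"].value
        ]
        # Assume an mpich-style fallback for providers the table does not list.
        mkl_mpi = mkl_mapper["mpi"].get(spec["mpi"].name, "mpich")
        return {
            "MKL_INTERFACE": "lp64",
            "MKL_THREADING": mkl_threads,
            "MKL_MPI": mkl_mpi,
        }
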
@@ -181,7 +181,7 @@ def configure_args(self):

        if "^fftw" in spec:
            args.append("--with-fftw-prefix=%s" % spec["fftw"].prefix)
        elif spec["fftw-api"].name in INTEL_MATH_LIBRARIES:
        elif spec.satisfies("^[virtuals=fftw-api] intel-oneapi-mkl"):
            # As of version 10.0, Octopus depends on fftw-api instead
            # of FFTW. If FFTW is not in the dependency tree, then
            # it ought to be MKL as it is currently the only providers
@@ -85,11 +85,11 @@ def cmake_args(self):
            args.extend([self.define("PASTIX_WITH_STARPU", "ON")])
        args.extend([self.define_from_variant("PASTIX_WITH_CUDA", "cuda")])

        if "^intel-mkl" in spec or "^intel-parallel-studio+mkl" in spec:
        if spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl"):
            args.extend([self.define("BLA_VENDOR", "Intel10_64lp_seq")])
        elif "^netlib-lapack" in spec:
        elif spec.satisfies("^[virtuals=lapack] netlib-lapack"):
            args.extend([self.define("BLA_VENDOR", "Generic")])
        elif "^openblas" in spec:
        elif spec.satisfies("^[virtuals=lapack] openblas"):
            args.extend([self.define("BLA_VENDOR", "OpenBLAS")])

        if spec.satisfies("+mpi"):
@@ -292,9 +292,6 @@ class Petsc(Package, CudaPackage, ROCmPackage):
        when="@3.20.2:3.20.4 ^hipsparse@6.0",
    )

    # 3.8.0 has a build issue with MKL - so list this conflict explicitly
    conflicts("^intel-mkl", when="@3.8.0")

    # These require +mpi
    mpi_msg = "Requires +mpi"
    conflicts("+cgns", when="~mpi", msg=mpi_msg)

@@ -56,7 +56,6 @@ class PyDevito(PythonPackage):

    depends_on("mpi", type=("build", "run"), when="+mpi")

    depends_on("intel-parallel-studio", type="run", when="%intel@:2021.1.1")
    depends_on("intel-oneapi-compilers", type="run", when="%intel@2021.1.2:")

    patch("4.8.1.patch", when="@4.8.1")
@@ -289,10 +289,10 @@ def blas_lapack_pkg_config(self) -> Tuple[str, str]:
        blas = spec["blas"].libs.names[0]
        lapack = spec["lapack"].libs.names[0]

        if spec["blas"].name in ["intel-mkl", "intel-parallel-studio", "intel-oneapi-mkl"]:
        if spec["blas"].name == "intel-oneapi-mkl":
            blas = "mkl-dynamic-lp64-seq"

        if spec["lapack"].name in ["intel-mkl", "intel-parallel-studio", "intel-oneapi-mkl"]:
        if spec["lapack"].name == "intel-oneapi-mkl":
            lapack = "mkl-dynamic-lp64-seq"

        if spec["blas"].name in ["blis", "amdblis"]:

@@ -389,11 +389,7 @@ def write_library_dirs(f, dirs):

        # Tell numpy where to find BLAS/LAPACK libraries
        with open("site.cfg", "w") as f:
            if (
                "^intel-mkl" in spec
                or "^intel-parallel-studio+mkl" in spec
                or "^intel-oneapi-mkl" in spec
            ):
            if "^intel-oneapi-mkl" in spec:
                f.write("[mkl]\n")
                # FIXME: as of @1.11.2, numpy does not work with separately
                # specified threading and interface layers. A workaround is a

@@ -497,11 +493,7 @@ def setup_build_environment(self, env):
        # https://github.com/numpy/numpy/pull/13132
        # https://numpy.org/doc/1.25/user/building.html#accelerated-blas-lapack-libraries
        # https://numpy.org/doc/1.25/user/building.html#blas
        if (
            spec["blas"].name == "intel-mkl"
            or spec["blas"].name == "intel-parallel-studio"
            or spec["blas"].name == "intel-oneapi-mkl"
        ):
        if spec["blas"].name == "intel-oneapi-mkl":
            blas = "mkl"
        elif spec["blas"].name == "blis" or spec["blas"].name == "amdblis":
            blas = "blis"

@@ -517,11 +509,7 @@ def setup_build_environment(self, env):
        env.set("NPY_BLAS_ORDER", blas)

        # https://numpy.org/doc/1.25/user/building.html#lapack
        if (
            spec["lapack"].name == "intel-mkl"
            or spec["lapack"].name == "intel-parallel-studio"
            or spec["lapack"].name == "intel-oneapi-mkl"
        ):
        if spec["lapack"].name == "intel-oneapi-mkl":
            lapack = "mkl"
        elif spec["lapack"].name == "openblas":
            lapack = "openblas"
@@ -27,8 +27,6 @@ class PyTomopy(PythonPackage):
|
||||
depends_on("cuda", when="@master")
|
||||
# The shared opencv is not found by during runtest. Not using GOT/PLT is faster too
|
||||
depends_on("opencv+imgproc~shared@3.4:", when="@master")
|
||||
# During the runtest, the shared MKL libs aren't found yet:
|
||||
# depends_on('intel-mkl~shared')
|
||||
depends_on("cmake@3.17:", type=("build"))
|
||||
depends_on("ninja", type=("build"))
|
||||
depends_on("py-setuptools-scm", type=("build"))
|
||||
|
@@ -662,14 +662,10 @@ def enable_or_disable(variant, keyword="USE", var=None):
|
||||
elif self.spec["lapack"].name in ["libflame", "amdlibflame"]:
|
||||
env.set("BLAS", "FLAME")
|
||||
env.set("WITH_BLAS", "FLAME")
|
||||
elif self.spec["blas"].name in ["intel-mkl", "intel-parallel-studio", "intel-oneapi-mkl"]:
|
||||
elif self.spec["blas"].name == "intel-oneapi-mkl":
|
||||
env.set("BLAS", "MKL")
|
||||
env.set("WITH_BLAS", "mkl")
|
||||
# help find MKL
|
||||
if self.spec["mkl"].name == "intel-oneapi-mkl":
|
||||
env.set("INTEL_MKL_DIR", self.spec["mkl"].prefix.mkl.latest)
|
||||
else:
|
||||
env.set("INTEL_MKL_DIR", self.spec["mkl"].prefix.mkl)
|
||||
env.set("INTEL_MKL_DIR", self.spec["mkl"].prefix.mkl.latest)
|
||||
elif self.spec["blas"].name == "openblas":
|
||||
env.set("BLAS", "OpenBLAS")
|
||||
env.set("WITH_BLAS", "open")
|
||||
|
@@ -69,9 +69,9 @@ class QESirius(CMakePackage):
|
||||
depends_on("hdf5@1.8.16:+fortran+hl~mpi", when="hdf5=serial")
|
||||
|
||||
with when("+openmp"):
|
||||
depends_on("fftw+openmp", when="^[virtuals=fftw-api] fftw")
|
||||
depends_on("openblas threads=openmp", when="^[virtuals=blas] openblas")
|
||||
depends_on("intel-mkl threads=openmp", when="^[virtuals=blas] intel-mkl")
|
||||
requires("^fftw+openmp", when="^[virtuals=fftw-api] fftw")
|
||||
requires("^openblas threads=openmp", when="^[virtuals=blas] openblas")
|
||||
requires("^intel-oneapi-mkl threads=openmp", when="^[virtuals=blas] intel-oneapi-mkl")
|
||||
|
||||
def cmake_args(self):
|
||||
args = [
|
||||
@@ -92,7 +92,7 @@ def cmake_args(self):
|
||||
# Work around spack issue #19970 where spack sets
|
||||
# rpaths for MKL just during make, but cmake removes
|
||||
# them during make install.
|
||||
if self.spec["lapack"].name in INTEL_MATH_LIBRARIES:
|
||||
if self.spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl"):
|
||||
args.append("-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON")
|
||||
spec = self.spec
|
||||
args.append(self.define("BLAS_LIBRARIES", spec["blas"].libs.joined(";")))
|
||||
|
@@ -8,9 +8,9 @@
|
||||
|
||||
class Qmcpack(CMakePackage, CudaPackage):
|
||||
"""QMCPACK, is a modern high-performance open-source Quantum Monte
|
||||
Carlo (QMC) simulation code."""
|
||||
Carlo (QMC) simulation code.
|
||||
"""
|
||||
|
||||
# Package information
|
||||
homepage = "https://www.qmcpack.org/"
|
||||
git = "https://github.com/QMCPACK/qmcpack.git"
|
||||
maintainers("ye-luo")
|
||||
@@ -113,11 +113,8 @@ class Qmcpack(CMakePackage, CudaPackage):
|
||||
msg="QMCPACK CUDA+SOA variant does not exist prior to v. 3.5.0.",
|
||||
)
|
||||
|
||||
conflicts("^openblas+ilp64", msg="QMCPACK does not support OpenBLAS 64-bit integer variant")
|
||||
|
||||
conflicts("^openblas threads=none", msg="QMCPACK does not support OpenBLAS without threading")
|
||||
|
||||
conflicts("^openblas threads=pthreads", msg="QMCPACK does not support OpenBLAS with pthreads")
|
||||
requires("^openblas~ilp64 threads=openmp", when="^[virtuals=blas,lapack] openblas")
|
||||
requires("^intel-oneapi-mkl ~ilp64", when="^[virtuals=blas,lapack] intel-oneapi-mkl")
|
||||
|
||||
conflicts(
|
||||
"cuda_arch=none",
|
||||
@@ -125,10 +122,6 @@ class Qmcpack(CMakePackage, CudaPackage):
|
||||
msg="A value for cuda_arch must be specified. Add cuda_arch=XX",
|
||||
)
|
||||
|
||||
# Omitted for now due to concretizer bug
|
||||
# conflicts('^intel-mkl+ilp64',
|
||||
# msg='QMCPACK does not support MKL 64-bit integer variant')
|
||||
|
||||
# QMCPACK 3.15.0 increased the minimum gcc to 9
|
||||
conflicts("%gcc@:8", when="@3.15.0:")
|
||||
|
||||
@@ -164,8 +157,8 @@ class Qmcpack(CMakePackage, CudaPackage):
|
||||
"QMCPACK releases prior to 3.5.0 require the "
|
||||
"Intel compiler when linking against Intel MKL"
|
||||
)
|
||||
conflicts("%gcc", when="@:3.4.0 ^intel-mkl", msg=mkl_warning)
|
||||
conflicts("%llvm", when="@:3.4.0 ^intel-mkl", msg=mkl_warning)
|
||||
conflicts("%gcc", when="@:3.4.0 ^[virtuals=blas,lapack] intel-oneapi-mkl", msg=mkl_warning)
|
||||
conflicts("%llvm", when="@:3.4.0 ^[virtuals=blas,lapack] intel-oneapi-mkl", msg=mkl_warning)
|
||||
|
||||
# Dependencies match those in the QMCPACK manual.
|
||||
# FIXME: once concretizer can unite unconditional and conditional
|
||||
@@ -378,7 +371,7 @@ def cmake_args(self):
|
||||
# Next two environment variables were introduced in QMCPACK 3.5.0
|
||||
# Prior to v3.5.0, these lines should be benign but CMake
|
||||
# may issue a warning.
|
||||
if spec["lapack"].name in INTEL_MATH_LIBRARIES:
|
||||
if spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl"):
|
||||
args.append("-DENABLE_MKL=1")
|
||||
args.append("-DMKL_ROOT=%s" % env["MKLROOT"])
|
||||
else:
|
||||
|
@@ -80,13 +80,13 @@ class QuantumEspresso(CMakePackage, Package):
|
||||
# Need OpenMP threaded FFTW and BLAS libraries when configured
|
||||
# with OpenMP support
|
||||
with when("+openmp"):
|
||||
depends_on("fftw+openmp", when="^[virtuals=fftw-api] fftw")
|
||||
depends_on("amdfftw+openmp", when="^[virtuals=fftw-api] amdfftw")
|
||||
depends_on("openblas threads=openmp", when="^[virtuals=blas] openblas")
|
||||
depends_on("amdblis threads=openmp", when="^[virtuals=blas] amdblis")
|
||||
depends_on("intel-mkl threads=openmp", when="^[virtuals=blas] intel-mkl")
|
||||
depends_on("armpl-gcc threads=openmp", when="^[virtuals=blas] armpl-gcc")
|
||||
depends_on("acfl threads=openmp", when="^[virtuals=blas] acfl")
|
||||
requires("^fftw+openmp", when="^[virtuals=fftw-api] fftw")
|
||||
requires("^amdfftw+openmp", when="^[virtuals=fftw-api] amdfftw")
|
||||
requires("^openblas threads=openmp", when="^[virtuals=blas] openblas")
|
||||
requires("^amdblis threads=openmp", when="^[virtuals=blas] amdblis")
|
||||
requires("^intel-oneapi-mkl threads=openmp", when="^[virtuals=blas] intel-oneapi-mkl")
|
||||
requires("^armpl-gcc threads=openmp", when="^[virtuals=blas] armpl-gcc")
|
||||
requires("^acfl threads=openmp", when="^[virtuals=blas] acfl")
|
||||
|
||||
# Add Cuda Fortran support
|
||||
# depends on NVHPC compiler, not directly on CUDA toolkit
|
||||
@@ -250,9 +250,8 @@ class QuantumEspresso(CMakePackage, Package):
|
||||
depends_on("m4", type="build")
|
||||
|
||||
# If the Intel suite is used for Lapack, it must be used for fftw and vice-versa
|
||||
for _intel_pkg in INTEL_MATH_LIBRARIES:
|
||||
requires(f"^[virtuals=fftw-api] {_intel_pkg}", when=f"^[virtuals=lapack] {_intel_pkg}")
|
||||
requires(f"^[virtuals=lapack] {_intel_pkg}", when=f"^[virtuals=fftw-api] {_intel_pkg}")
|
||||
requires("^[virtuals=fftw-api] intel-oneapi-mkl", when="^[virtuals=lapack] intel-oneapi-mkl")
|
||||
requires("^[virtuals=lapack] intel-oneapi-mkl", when="^[virtuals=fftw-api] intel-oneapi-mkl")
|
||||
|
||||
# CONFLICTS SECTION
|
||||
# Omitted for now due to concretizer bug
|
||||
@@ -538,7 +537,7 @@ def install(self, pkg, spec, prefix):
|
||||
# you need to pass it in the FFTW_INCLUDE and FFT_LIBS directory.
|
||||
# QE supports an internal FFTW2, but only an external FFTW3 interface.
|
||||
|
||||
is_using_intel_libraries = spec["lapack"].name in INTEL_MATH_LIBRARIES
|
||||
is_using_intel_libraries = spec["lapack"].name == "intel-oneapi-mkl"
|
||||
if is_using_intel_libraries:
|
||||
# A seperate FFT library is not needed when linking against MKL
|
||||
options.append("FFTW_INCLUDE={0}".format(join_path(env["MKLROOT"], "include/fftw")))
|
||||
@@ -586,9 +585,9 @@ def install(self, pkg, spec, prefix):
|
||||
|
||||
if "+scalapack" in spec:
|
||||
if is_using_intel_libraries:
|
||||
if "^openmpi" in spec:
|
||||
if "^[virtuals=mpi] openmpi" in spec:
|
||||
scalapack_option = "yes"
|
||||
else: # mpich, intel-mpi
|
||||
else: # mpich
|
||||
scalapack_option = "intel"
|
||||
else:
|
||||
scalapack_option = "yes"
|
||||
|
@@ -24,8 +24,7 @@ class RRmpi(RPackage):
|
||||
depends_on("mpi")
|
||||
|
||||
# The following MPI types are not supported
|
||||
conflicts("^[virtuals=mpi] intel-mpi")
|
||||
conflicts("^[virtuals=mpi] intel-parallel-studio")
|
||||
conflicts("^[virtuals=mpi] intel-oneapi-mpi")
|
||||
conflicts("^[virtuals=mpi] mvapich2")
|
||||
conflicts("^[virtuals=mpi] spectrum-mpi")
|
||||
|
||||
|
@@ -177,7 +177,10 @@ def configure_args(self):
|
||||
|
||||
# R uses LAPACK in Fortran, which requires libmkl_gf_* when gfortran is used.
|
||||
# TODO: cleaning this up seem to require both compilers as dependencies and use variants.
|
||||
if spec["lapack"].name in INTEL_MATH_LIBRARIES and "gfortran" in self.compiler.fc:
|
||||
if (
|
||||
spec.satisfies("^[virtuals=lapack] intel-oneapi-mkl")
|
||||
and "gfortran" in self.compiler.fc
|
||||
):
|
||||
xlp64 = "ilp64" if spec["lapack"].satisfies("+ilp64") else "lp64"
|
||||
blas_flags = blas_flags.replace(f"mkl_intel_{xlp64}", f"mkl_gf_{xlp64}")
|
||||
lapack_flags = lapack_flags.replace(f"mkl_intel_{xlp64}", f"mkl_gf_{xlp64}")
|
||||
@@ -191,15 +194,13 @@ def configure_args(self):
|
||||
f"LDFLAGS=-Wl,-rpath,{extra_rpath}",
|
||||
f"--with-blas={blas_flags}",
|
||||
f"--with-lapack={lapack_flags}",
|
||||
# cannot disable docs with a normal configure option
|
||||
"ac_cv_path_PDFLATEX=",
|
||||
"ac_cv_path_PDFTEX=",
|
||||
"ac_cv_path_TEX=",
|
||||
"ac_cv_path_TEXI2DVI=",
|
||||
f"--with-libintl-prefix={spec['gettext'].prefix}",
|
||||
]
|
||||
|
||||
config_args.append("--with-libintl-prefix={0}".format(spec["gettext"].prefix))
|
||||
|
||||
if "+X" in spec:
|
||||
config_args.append("--with-cairo")
|
||||
config_args.append("--with-jpeglib")
|
||||
|
@@ -41,11 +41,12 @@ class ScineQcmaquis(CMakePackage):
|
||||
|
||||
depends_on("hdf5~mpi")
|
||||
depends_on("lapack")
|
||||
|
||||
depends_on("blas")
|
||||
for _pkg in ["openblas"] + list(INTEL_MATH_LIBRARIES):
|
||||
with when(f"^[virtuals=blas] {_pkg}"):
|
||||
depends_on(f"{_pkg}+ilp64 threads=openmp")
|
||||
|
||||
requires("^openblas +ilp64 threads=openmp", when="^[virtuals=blas,lapack] openblas")
|
||||
requires(
|
||||
"^intel-oneapi-mkl +ilp64 threads=openmp", when="^[virtuals=blas,lapack] intel-oneapi-mkl"
|
||||
)
|
||||
|
||||
depends_on("gsl")
|
||||
depends_on("boost+program_options+filesystem+system+thread+serialization+chrono @1.56:")
|
||||
|
@@ -213,15 +213,17 @@ def configure_args(self):
|
||||
# If autodetection fails for +shmem with one of these available to spack, please add
|
||||
# a "if spec.satisfies():" clause for said package.
|
||||
|
||||
if spec.satisfies("^intel-mpi") or spec.satisfies("^intel-oneapi-mpi"):
|
||||
if spec.satisfies("^[virtuals=mpi] intel-oneapi-mpi"):
|
||||
config_args.append("--with-mpi=intel3")
|
||||
elif (
|
||||
spec.satisfies("^mpich")
|
||||
or spec.satisfies("^mvapich2")
|
||||
or spec.satisfies("^cray-mpich")
|
||||
spec.satisfies("^[virtuals=mpi] mpich")
|
||||
or spec.satisfies("^[virtuals=mpi] mvapich2")
|
||||
or spec.satisfies("^[virtuals=mpi] cray-mpich")
|
||||
):
|
||||
config_args.append("--with-mpi=mpich3")
|
||||
elif spec.satisfies("^openmpi") or spec.satisfies("^hpcx-mpi"):
|
||||
elif spec.satisfies("^[virtuals=mpi] openmpi") or spec.satisfies(
|
||||
"^[virtuals=mpi] hpcx-mpi"
|
||||
):
|
||||
config_args.append("--with-mpi=openmpi")
|
||||
elif "~mpi" in spec:
|
||||
config_args.append("--without-mpi")
|
||||
|
@@ -202,7 +202,7 @@ class Seissol(CMakePackage, CudaPackage, ROCmPackage):
|
||||
depends_on("easi ~asagi jit=impalajit,lua", when="~asagi")
|
||||
depends_on("easi +asagi jit=impalajit,lua", when="+asagi")
|
||||
|
||||
depends_on("intel-mkl threads=none", when="gemm_tools_list=MKL")
|
||||
depends_on("intel-oneapi-mkl threads=none", when="gemm_tools_list=MKL")
|
||||
depends_on("blis threads=none", when="gemm_tools_list=BLIS")
|
||||
depends_on("openblas threads=none", when="gemm_tools_list=OpenBLAS")
|
||||
depends_on("libxsmm@main", when="gemm_tools_list=LIBXSMM_JIT")
|
||||
|
@@ -31,13 +31,14 @@ class Sionlib(AutotoolsPackage):
|
||||
|
||||
def configure_args(self):
|
||||
args = []
|
||||
spec = self.spec
|
||||
|
||||
if spec.satisfies("^intel-mpi"):
|
||||
if self.spec.satisfies("^[virtuals=mpi] intel-oneapi-mpi"):
|
||||
args.append("--mpi=intel2")
|
||||
elif spec.satisfies("^mpich") or spec.satisfies("^mvapich2"):
|
||||
elif self.spec.satisfies("^[virtuals=mpi] mpich") or self.spec.satisfies(
|
||||
"^[virtuals=mpi] mvapich2"
|
||||
):
|
||||
args.append("--mpi=mpich2")
|
||||
elif spec.satisfies("^openmpi"):
|
||||
elif self.spec.satisfies("^[virtuals=mpi] openmpi"):
|
||||
args.append("--mpi=openmpi")
|
||||
|
||||
return args
|
||||
|
@@ -153,9 +153,6 @@ class Sirius(CMakePackage, CudaPackage, ROCmPackage):
|
||||
depends_on("openblas threads=openmp", when="+openmp ^[virtuals=blas,lapack] openblas")
|
||||
depends_on("amdblis threads=openmp", when="+openmp ^[virtuals=blas] amdblis")
|
||||
depends_on("blis threads=openmp", when="+openmp ^[virtuals=blas] blis")
|
||||
depends_on(
|
||||
"intel-mkl threads=openmp", when="+openmp ^[virtuals=blas,lapack,fftw-api] intel-mkl"
|
||||
)
|
||||
depends_on(
|
||||
"intel-oneapi-mkl threads=openmp",
|
||||
when="+openmp ^[virtuals=blas,lapack,fftw-api] intel-oneapi-mkl",
|
||||
@@ -165,7 +162,6 @@ class Sirius(CMakePackage, CudaPackage, ROCmPackage):
|
||||
when="+scalapack ^[virtuals=blas,lapack,fftw-api] intel-oneapi-mkl",
|
||||
)
|
||||
|
||||
conflicts("intel-mkl", when="@7.6.0:")
|
||||
# MKLConfig.cmake introduced in 2021.3
|
||||
conflicts("intel-oneapi-mkl@:2021.2", when="^intel-oneapi-mkl")
|
||||
|
||||
@@ -244,7 +240,7 @@ def cmake_args(self):
|
||||
if "^cray-libsci" in spec:
|
||||
args.append(self.define(cm_label + "USE_CRAY_LIBSCI", "ON"))
|
||||
|
||||
if spec["blas"].name in INTEL_MATH_LIBRARIES:
|
||||
if spec.satisfies("^[virtuals=blas] intel-oneapi-mkl"):
|
||||
args.append(self.define(cm_label + "USE_MKL", "ON"))
|
||||
|
||||
if spec.satisfies("@7.6.0:"):
|
||||
@@ -254,7 +250,11 @@ def cmake_args(self):
|
||||
"openmp": "gnu_thread",
|
||||
"tbb": "tbb_thread",
|
||||
},
|
||||
"mpi": {"intel-mpi": "intelmpi", "mpich": "mpich", "openmpi": "openmpi"},
|
||||
"mpi": {
|
||||
"intel-oneapi-mpi": "intelmpi",
|
||||
"mpich": "mpich",
|
||||
"openmpi": "openmpi",
|
||||
},
|
||||
}
|
||||
|
||||
mkl_threads = mkl_mapper["threading"][
|
||||
|
@@ -145,11 +145,7 @@ class Xyce(CMakePackage):
|
||||
depends_on("blis libs=static", when="^[virtuals=blas] blis+cblas")
|
||||
depends_on("blis libs=static", when="^[virtuals=blas] blis+blas")
|
||||
depends_on("clblast~shared", when="^[virtuals=blas] clblast+netlib")
|
||||
depends_on("intel-mkl~shared", when="^[virtuals=blas] intel-mkl")
|
||||
depends_on("intel-oneapi-mkl~shared", when="^[virtuals=blas] intel-oneapi-mkl")
|
||||
depends_on(
|
||||
"intel-parallel-studio~shared", when="^[virtuals=blas] intel-parallel-studio+mkl"
|
||||
)
|
||||
depends_on("veclibfort~shared", when="^[virtuals=blas] veclibfort")
|
||||
conflicts("^essl", msg="essl not supported with +pymi_static_tpls")
|
||||
conflicts("^flexiblas", msg="flexiblas not supported with +pymi_static_tpls")
|
||||
|
@@ -14,6 +14,7 @@ class Zuo(AutotoolsPackage):
|
||||
license("Apache-2.0 AND MIT", checked_by="Buldram")
|
||||
maintainers("Buldram")
|
||||
|
||||
version("1.12", sha256="0c8a3a86365fb10961d9a1f536b1cd0d7fcdc2779af03236a340539966b33f86")
|
||||
version("1.11", sha256="8404bea8ecae4576f44dece7efcab69d94c8a30ec10ea186f86823d37e74694b")
|
||||
|
||||
variant("big", default=False, description="Enable hygienic macro support")
|
||||
|
Reference in New Issue
Block a user