spack/etc/spack/defaults/config.yaml

# -------------------------------------------------------------------------
# This is the default spack configuration file.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/config.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/config.yaml
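#
# For example, a per-user override in ~/.spack/config.yaml might look like
# the following (values are purely illustrative, not recommendations):
#
#   config:
#     build_jobs: 8
#     build_stage:
#       - ~/.spack/stage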
# -------------------------------------------------------------------------
config:
# This is the path to the root of the Spack install tree.
# You can use $spack here to refer to the root of the Spack instance.
install_tree: $spack/opt/spack
# Locations where templates should be found
template_dirs:
- $spack/share/spack/templates
# Default directory layout
install_path_scheme: "${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}"
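# With the scheme above, an installed package might land in a path such as
# linux-ubuntu18.04-x86_64/gcc-7.4.0/zlib-1.2.11-<hash> (the architecture,
# compiler, package and hash shown here are illustrative).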
# Locations where different types of modules should be installed.
module_roots:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
# Temporary locations Spack can try to use for builds.
#
# Recommended options are given below.
#
# Builds can be faster in temporary directories on some (e.g., HPC) systems.
# Specifying `$tempdir` will ensure use of the default temporary directory
# (i.e., `$TMP` or `$TMPDIR`).
#
# Another option that prevents conflicts and potential permission issues is
# to specify `~/.spack/stage`, which ensures each user builds in their home
# directory.
#
# A more traditional path uses the value of `$spack/var/spack/stage`, which
# builds directly inside the Spack instance without staging in a temporary
# space. The problem with specifying a path inside a Spack instance is that
# it precludes installing Spack as a system package or via pip.
#
# In any case, if the username is not already in the path, Spack will append
# the value of `$user` in an attempt to avoid potential conflicts between
# users in shared temporary spaces.
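#
# For example, for user 'alice' with `$tempdir` resolving to /tmp, the first
# `build_stage` entry below would expand to /tmp/alice/spack-stage (purely
# illustrative).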
#
# The build stage can be purged with `spack clean --stage` and
# `spack clean -a`, so it is important that the specified directory uniquely
# identifies Spack staging to avoid accidentally wiping out non-Spack work.
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
# - $spack/var/spack/stage
# Cache directory for already downloaded source tarballs and archived
# repositories. This can be purged with `spack clean --downloads`.
source_cache: $spack/var/spack/cache
# Cache directory for miscellaneous files, like the package index.
# This can be purged with `spack clean --misc-cache`.
misc_cache: ~/.spack/cache
# Timeout in seconds used for downloading sources etc. This only applies
# to the connection phase and can be increased for slow connections or
# servers. 0 means no timeout.
connect_timeout: 10
# If this is false, tools like curl that use SSL will not verify
# certificates (e.g., curl will use the -k option).
verify_ssl: true
# Suppress gpg warnings from binary package verification
# Only suppresses warnings, gpg failure will still fail the install
# A potential rationale for setting this to true: users have already
# explicitly trusted the gpg key they are using and may not want to see
# repeated warnings that it is self-signed or similar.
suppress_gpg_warnings: false
# If set to true, Spack will attempt to build any compiler on the spec
# that is not already available. If set to false, Spack will only use
# compilers already configured in compilers.yaml.
install_missing_compilers: false
# If set to true, Spack will always check checksums after downloading
# archives. If false, Spack skips the checksum step.
checksum: true
# If set to true, `spack install` and friends will NOT clean
# potentially harmful variables from the build environment. Use wisely.
dirty: false
# The language the build environment will use. This will produce English
# compiler messages by default, so the log parser can highlight errors.
# If set to C, it will use English (see man locale).
# If set to the empty string (''), it will use the language from the
# user's environment.
build_language: C
# When set to true, concurrent instances of Spack will use locks to
# avoid modifying the install tree, database file, etc. If false, Spack
# will disable all locking, but you must NOT run concurrent instances
# of Spack. For filesystems that don't support locking, you should set
# this to false and run one Spack at a time, but otherwise we recommend
# enabling locks.
locks: true
# The maximum number of jobs to use when running `make` in parallel,
# always limited by the number of cores available. For instance:
# - If set to 16 on a 4-core machine, `spack install` will run `make -j4`
# - If set to 16 on an 18-core machine, `spack install` will run `make -j16`
# If not set, Spack will use all available cores up to 16.
# build_jobs: 16
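# The limit can also be overridden for a single invocation, e.g.:
#   spack install -j 8 <spec>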
# If set to true, Spack will use ccache to cache C compiles.
ccache: false
# How long to wait to lock the Spack installation database. This lock is used
# when Spack needs to manage its own package metadata and all operations are
# expected to complete within the default time limit. The timeout should
# therefore generally be left untouched.
db_lock_timeout: 3
# How long to wait when attempting to modify a package (e.g. to install it).
# This value should typically be 'null' (never time out) unless the Spack
# instance only ever has a single user at a time, and only if the user
# anticipates that a significant delay indicates that the lock attempt will
# never succeed.
package_lock_timeout: null
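# For example, to give up on a package lock after one minute instead of
# waiting indefinitely (illustrative value):
# package_lock_timeout: 60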
# Control whether Spack embeds RPATH or RUNPATH attributes in ELF binaries.
# Has no effect on macOS. DO NOT MIX these within the same install tree.
# See the Spack documentation for details.
shared_linking: 'rpath'
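# To embed RUNPATH instead (illustrative alternative):
# shared_linking: 'runpath'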
# Set to 'false' to allow installation on filesystems that don't allow setgid
# bit manipulation by unprivileged users (e.g. AFS).
allow_sgid: true