Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code
Updating the vendored dependencies will be done in separate PRs.
This PR makes it possible to specify the `url` and `ref` of the Spack instance used in a container recipe, simply by expanding the YAML schema as outlined in #20442:
```yaml
container:
  images:
    os: amazonlinux:2
    spack:
      ref: develop
      resolve_sha: true
```
The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name to a commit sha. When this new ability is leveraged, an additional "bootstrap" stage is added, which builds an image with Spack set up and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.
Modifications:
- [x] Permit pinning the version of Spack, either by branch, tag, or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit printing the bootstrap image as a standalone
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
* py-vermin: add latest version 1.3.1
* Exclude line from Vermin since version is already being checked for
Vermin 1.3.1 finds that the `encoding` kwarg of builtin `open()` requires Python 3+.
When installing packages with a lot of dependencies, there is no easy way
to judge the current progress (apart from running `spack spec -I pkg`
in another terminal). This change allows Spack to update the terminal's
title with status information, including its current progress as well as
information about the current and total number of packages.
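For reference, terminal titles are set with an OSC escape sequence; a minimal sketch of the mechanism (the progress format shown is illustrative, not Spack's exact output):
```python
import sys

def set_terminal_title(title):
    # OSC 0 sets the window/icon title on xterm-compatible terminals;
    # \033 is ESC and \007 (BEL) terminates the sequence
    sys.stdout.write('\033]0;{0}\007'.format(title))
    sys.stdout.flush()

set_terminal_title('[12/87] installing zlib')  # hypothetical status line
```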
Spack has logic to preserve an installation prefix when it is being
overwritten: if the new install fails, the old files are restored.
This PR adds error handling for when this backup restoration fails
(i.e. the new install fails, and then some unexpected error prevents
restoration from the backup).
* Make sure PackageInstaller does not remove the just-restored
install dir after failure in spack install --overwrite
* Remove cryptic error message and rethrow actual error
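The resulting control flow, distilled into a sketch (the names here are illustrative, not Spack's API):
```python
def install_with_overwrite(do_install, backup):
    try:
        do_install()
    except Exception as install_error:
        try:
            backup.restore()  # put the old prefix back
        except Exception:
            # restoration itself failed: surface the original install
            # error instead of a cryptic message about the backup
            raise install_error
        raise
```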
This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.
The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.
Because of this, we need to track open lock files so that we only close them when
a process no longer needs them. We do this by tracking each lockfile by its
inode and process id. This has several nice properties:
1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
number of times necessary for the locks we have.
Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.
Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
needed (this avoids inadvertently releasing locks that should not be released).
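A simplified sketch of the reference-counting scheme described above (the real `OpenFileTracker` also deals with permissions and with lock files that disappear):
```python
import os

class OpenFileTracker(object):
    """Track open lock files by (pid, inode): one descriptor per key."""

    def __init__(self):
        self._descriptors = {}  # (pid, inode) -> [file_object, refcount]

    def get_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        if key not in self._descriptors:
            self._descriptors[key] = [open(path, 'r+'), 0]
        entry = self._descriptors[key]
        entry[1] += 1
        return entry[0]

    def release_fh(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._descriptors[key]
        entry[1] -= 1
        if entry[1] == 0:
            # safe to close: no holder of this inode remains, so no
            # fcntl locks can be inadvertently released
            entry[0].close()
            del self._descriptors[key]
```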
* Refactor active environment getters
- Make `spack.environment.active_environment` a trivial getter for the active
environment, replacing `spack.environment.get_env` when the arguments are
not needed
- New method `spack.cmd.require_active_environment(cmd_name)` for
commands that require an environment (rather than abusing
get_env/active_environment)
- Clean up calling code to call spack.environment.active_environment or
spack.cmd.require_active_environment as appropriate
- Remove the `-e` parsing from `active_environment`, because `main.py` is
responsible for processing `-e` and already activates the environment.
- Move `spack.environment.find_environment` to
`spack.cmd.find_environment`, to avoid having spack.environment aware
of argparse.
- Refactor `spack install` command so argument parsing is all handled in the
command, no argparse in spack.environment or spack.installer
- Update documentation
* Python 2: toplevel import errors only with 'as ev'
In two files, `import spack.environment as ev` leads to errors. These
errors are not well understood ("'module' object has no attribute
'environment'"). All other files standardize on the above syntax.
This is both a bugfix and a generalization of #25168. In #25168, we attempted to filter padding
*just* from the debug output of `spack.util.executable.Executable` objects. It turns out we got it
wrong -- filtering the command line string instead of the arg list resulted in output like this:
```
==> [2021-08-05-21:34:19.918576] ["'", '/', 'b', 'i', 'n', '/', 't', 'a', 'r', "'", ' ', "'", '-', 'o', 'x', 'f', "'", ' ', "'", '/', 't', 'm', 'p', '/', 'r', 'o', 'o', 't', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '-', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '-', 'w', 'p', 'h', 'p', 't', 'l', 'h', 'w', 'u', 's', 'e', 'i', 'a', '4', 'k', 'p', 'g', 'y', 'd', 'q', 'l', 'l', 'i', '2', '4', 'q', 'b', '5', '5', 'q', 'u', '4', '/', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '.', 't', 'a', 'r', '.', 'b', 'z', '2', "'"]
```
Additionally, plenty of builds output padded paths in other places -- e.g., not just in command
arguments, but in `tty` messages via `llnl.util.filesystem` and elsewhere. `Executable`
isn't really the right place for this.
This PR reverts the changes to `Executable` and moves the filtering into `llnl.util.tty`. There is
now a context manager there that you can use to install a filter for all output.
`spack.installer.build_process()` now uses this context manager to make `tty` do path filtering
when padding is enabled.
- [x] revert filtering in `Executable`
- [x] add ability for `tty` to filter output
- [x] install output filter in `build_process()`
- [x] tests
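Usage ends up looking roughly like this (a sketch: the padding pattern is illustrative, and the context-manager name simply follows the description above):
```python
import re

import llnl.util.tty as tty

padding = re.compile(r'(/__spack_path_placeholder__)+')

def unpad(string):
    return padding.sub('/[padded]', string)

# everything tty prints inside the block passes through the filter
with tty.output_filter(unpad):
    tty.msg('installing to /__spack_path_placeholder__/opt/zlib')
```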
The output order for `spack diff` is nondeterministic for larger diffs -- if you
run it several times it will not put the fields in the spec in the same order on
successive invocations.
This makes a few fixes to `spack diff`:
- [x] Implement the change discussed in https://github.com/spack/spack/pull/22283#discussion_r598337448
to make `AspFunction` comparable in and of itself and to eliminate the need for `to_tuple()`
- [x] Sort the lists of diff properties so that the output is always in the same order.
- [x] Make the output for different fields the same as what we use in the solver. Previously, we
would use `Type(value)` for non-string values and `value` for strings. Now we just use
the value. So the output looks a little cleaner:
```
== Old ========================== == New ====================
@@ node_target @@ @@ node_target @@
- gdbm Target(x86_64) - gdbm x86_64
+ zlib Target(skylake) + zlib skylake
@@ variant_value @@ @@ variant_value @@
- ncurses symlinks bool(False) - ncurses symlinks False
+ zlib optimize bool(True) + zlib optimize True
@@ version @@ @@ version @@
- gdbm Version(1.18.1) - gdbm 1.18.1
+ zlib Version(1.2.11) + zlib 1.2.11
@@ node_os @@ @@ node_os @@
- gdbm catalina - gdbm catalina
+ zlib catalina + zlib catalina
```
I suppose if we wanted to use `repr()` in the output we could do that, and it would
be consistent, but we don't do that elsewhere -- the types of things in Specs are
all stringifiable, so the string and the name of the attribute (`version`, `node_os`,
etc.) are sufficient to know what they are.
Spack allows users to set `padded_length` to pad out the installation path in
build farms so that any binaries created are more easily relocatable. The issue
with this is that the padding dominates installation output and makes it
difficult to see what is going on. The padding also causes logs to easily
exceed size limits for things like GitLab artifacts.
This PR fixes this by adding a filter in the logger daemon. If you use a
setting like this:
```yaml
config:
  install_tree:
    padded_length: 512
```
Then lines like this in the output:
```
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_pla/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
```
will be replaced with the much more readable:
```
==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/[padded-to-512-chars]/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga
```
You can see that the padding has been replaced with `[padded-to-512-chars]` to
indicate the total number of characters in the padded prefix. Over a long log
file, this should save a lot of space and allow us to see error messages in
GitHub/GitLab log output.
The *actual* build logs still have full paths in them. Also lines that are
output by Spack and not by a package build are not filtered and will still
display the fully padded path. There aren't that many of these, so the change
should still reduce file size and improve readability quite a bit.
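A hedged sketch of the substitution itself (the real filter derives the marker from the configured `padded_length` rather than from the match):
```python
import re

placeholder = '__spack_path_placeholder__'
pad_run = re.compile('(/{0})+'.format(placeholder))

def filter_padding(line):
    # collapse each run of placeholder path components into a marker
    # recording how many characters were elided
    return pad_run.sub(
        lambda m: '/[padded-to-%d-chars]' % len(m.group(0)), line)
```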
* fix remaining flake8 errors
* imports: sort imports everywhere in Spack
We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.
This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
* util.tty.log: read up to 100 lines if ready
Rework to read up to 100 lines from the captured stdin as long as data
is ready to be read immediately. Adds a helper function to poll with
`select` for ready data. This showed a roughly 5-10x perf improvement
for high-rate writes through the logger with relatively short lines.
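The ready-check can be built on `select` with a zero timeout; a minimal sketch of such a helper:
```python
import select

def _input_available(fd, timeout=0):
    """Return True if fd has data that can be read without blocking."""
    readable, _, _ = select.select([fd], [], [], timeout)
    return bool(readable)
```
Reads then happen in bursts: keep calling `readline()` while data is immediately available, up to the 100-line cap.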
* util.tty.log: Defer flushes to end of ready reads
Rather than flush per line, flush per set of reads. Since this is a
non-blocking loop, the total perceived wait is short.
* util.tty.log: only scan each line once, usually
Rather than always find all control characters then substitute them all,
use `subn` to count the number of control characters replaced. Only if
control characters exist find out what they are. This could be made
truly single pass with sub with a function, but it's a more intrusive
change and this got 99%ish of the performance improvement (roughly
another 2x in some cases).
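A sketch of the `subn` trick (the pattern and handler here are illustrative, not the actual ones in `llnl.util.tty.log`):
```python
import re

control = re.compile('(\x11\n|\x13\n)')  # illustrative control markers

def strip_controls(raw_line, handle_control):
    # subn substitutes and counts in one pass, so the common case
    # (no control characters) costs a single scan
    clean, ncontrol = control.subn('', raw_line)
    if ncontrol:
        # only when controls exist do we scan again to see which ones
        for match in control.finditer(raw_line):
            handle_control(match.group(0))
    return clean
```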
* util.tty.log: remove check for `readable`
Python < 3 does not support a readable check on streams; the check should
not be necessary here anyway, since we control the only use and it is
explicitly a stream to be read.
The loading protocol mandates that the module we are going
to import must already be in sys.modules before its code is
executed, to prevent unbounded recursion and multiple loading.
Loading a module from file now exits early if the module is
already in sys.modules.
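A simplified sketch of that early exit, using the standard `importlib` recipe (Python 3 shown; Spack's actual loader also covers Python 2):
```python
import importlib.util
import sys

def load_module_from_file(module_name, module_path):
    if module_name in sys.modules:
        # already imported (or mid-import): reuse it rather than
        # executing the module code a second time
        return sys.modules[module_name]

    spec = importlib.util.spec_from_file_location(module_name, module_path)
    module = importlib.util.module_from_spec(spec)
    # register *before* exec so a recursive import of the same name
    # finds the module in sys.modules and the recursion bottoms out
    sys.modules[module_name] = module
    spec.loader.exec_module(module)
    return module
```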
In debug mode, processes taking an exclusive lock write out their node name to
the lock file. We were using `getfqdn()` for this, but it seems to produce
inconsistent results when used from within some github actions containers.
We get this error because getfqdn() seems to return a short name in one place
and a fully qualified name in another:
```
File "/home/runner/work/spack/spack/lib/spack/spack/test/llnl/util/lock.py", line 1211, in p1
assert lock.host == self.host
AssertionError: assert 'fv-az290-764....cloudapp.net' == 'fv-az290-764'
- fv-az290-764.internal.cloudapp.net
+ fv-az290-764
!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!
== 1 failed, 2547 passed, 7 skipped, 22 xfailed, 2 xpassed in 1238.67 seconds ==
```
This seems to stem from https://bugs.python.org/issue5004.
We don't really need to get a fully qualified hostname for debugging, so use
`gethostname()` because its results are more consistent. This seems to fix the
issue.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
We set LC_ALL=C to encourage a build process to generate ASCII
output (so our logger daemon can decode it). Most packages
respect this but it appears that intel-oneapi-compilers does
not in some cases (see #22813). This reads the output of the build
process as UTF-8, which still works if the build process respects
LC_ALL=C but also works if the process generates UTF-8 output.
For Python >= 3.7 all files are opened with UTF-8 encoding by
default. Python 2 does not support the encoding argument on
'open', so to support Python 2 the files would have to be
opened in byte mode and explicitly decoded (as a side note,
this would be the only way to handle other encodings without
being informed of them in advance).
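A hedged sketch of the compatible pattern (`errors='replace'` is an assumption here, used so undecodable bytes cannot crash the reader):
```python
import codecs
import sys

def open_build_output(path):
    if sys.version_info >= (3,):
        # decode as UTF-8; this also covers the pure-ASCII output a
        # build produces when it respects LC_ALL=C
        return open(path, 'r', encoding='utf-8', errors='replace')
    # Python 2: no encoding argument on open(), so read bytes and
    # decode explicitly through a codecs StreamReader
    return codecs.getreader('utf-8')(open(path, 'rb'), errors='replace')
```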
We have been using the `@llnl.util.lang.key_ordering` decorator for specs
and most of their components. This leverages the fact that in Python,
tuple comparison is lexicographic. It allows you to implement a
`_cmp_key` method on your class, and have `__eq__`, `__lt__`, etc.
implemented automatically using that key. For example, you might use
tuple keys to implement comparison, e.g.:
```python
class Widget:
    # author implements this
    def _cmp_key(self):
        return (
            self.a,
            self.b,
            (self.c, self.d),
            self.e,
        )

    # operators are generated by @key_ordering
    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self, other):
        return self._cmp_key() < other._cmp_key()

    # etc.
```
The issue there for simple comparators is that we have to build the
tuples *and* we have to generate all the values in them up front. When
implementing comparisons for large data structures, this can be costly.
This PR replaces `@key_ordering` with a new decorator,
`@lazy_lexicographic_ordering`. Lazy lexicographic comparison maps the
tuple comparison shown above to generator functions. Instead of comparing
based on pre-constructed tuple keys, users of this decorator can compare
using elements from a generator. So, you'd write:
```python
@lazy_lexicographic_ordering
class Widget:
    def _cmp_iter(self):
        yield self.a
        yield self.b

        def cd_fun():
            yield self.c
            yield self.d

        yield cd_fun
        yield self.e

    # operators are added by decorator (but are a bit more complex)
```
There are no tuples that have to be pre-constructed, and the generator
does not have to complete. Instead of tuples, we simply make functions
that lazily yield what would've been in the tuple. If a yielded value is
a `callable`, the comparison functions will call it and recursively
compare it. The comparator just walks the data structure like you'd expect
it to.
The ``@lazy_lexicographic_ordering`` decorator handles the details of
implementing comparison operators, and the ``Widget`` implementor only
has to worry about writing ``_cmp_iter``, and making sure the elements in
it are also comparable.
Using this PR shaves another 1.5 sec off the runtime of `spack buildcache
list`, and it also speeds up Spec comparison by about 30%. The runtime
improvement comes mostly from *not* calling `hash()` on the values yielded by `_cmp_iter()`.
Sometimes we need to patch a file that is a dependency for some other
automatically generated file that comes in a release tarball. As a
result, make tries to regenerate the dependent file using additional
tools (e.g. help2man), which would not be needed otherwise.
In some cases, it's preferable to avoid that (e.g. see #21255). A way
to do that is to save the modification timestamps before patching and
restoring them afterwards. This PR introduces a context wrapper that
does that.
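A minimal sketch of such a context wrapper (simplified; the actual helper may differ in name and edge-case handling):
```python
import contextlib
import os

@contextlib.contextmanager
def keep_modification_time(*filenames):
    # record mtimes before the body (e.g. a patch) runs
    mtimes = dict(
        (f, os.path.getmtime(f)) for f in filenames if os.path.exists(f))
    try:
        yield
    finally:
        # restore them so make does not consider dependents out of date
        for f, mtime in mtimes.items():
            if os.path.exists(f):
                os.utime(f, (os.path.getatime(f), mtime))
```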
* sbang pushed back to callers;
star moved to util.lang
* updated unit test
* sbang test moved; local tests pass
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
`spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that were needed
for oneapi.py
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.
In addition to these changes, this includes:
* rename `spack flake8` to `spack style`
Aliases flake8 to style, and just runs flake8 as before, but with
a warning. The style command runs both `flake8` and `mypy`,
in sequence. Added --no-<tool> options to turn off one or the
other; both are on by default. Fixed two issues caught by the tools.
* stub typing module for python2.x
We don't support typing in Spack for python 2.x. To allow 2.x to
support `import typing` and `from typing import ...` without a
try/except dance to support old versions, this adds a stub module
*just* for python 2.x. Doing it this way means we can only reliably
use all type hints in python3.7+, and mypy.ini has been updated to
reflect that.
* add non-default black check to spack style
This is a first step to requiring black. It doesn't enforce it by
default, but it will check it if requested. Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow. All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.
* use style check in github action
Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). `spack test` is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Generic smoke tests are included for MPI
implementations and for C, C++, and Fortran compilers, as well as specific
smoke tests for 18 packages.
Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.
This handles the following trickier aspects of testing with direct
support in Spack's package API:
- [x] Caching source or intermediate build files at build time for
use at test time.
- [x] Test dependencies.
- [x] Packages that require a compiler for testing (such as library-only
packages).
See the packaging guide for more details on using Spack testing support.
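For flavor, a hedged sketch of a package-side `test` method (the package, executable, and expected output are hypothetical):
```python
from spack import *


class Libfoo(Package):  # hypothetical package
    """Example showing the shape of a stand-alone smoke test."""

    def test(self):
        # each check is independent; a failure is recorded and the
        # remaining checks still run (best-effort)
        self.run_test('foo', options=['--version'],
                      expected=['1.2.3'], purpose='version check')
        self.run_test('foo', options=['--selftest'],
                      status=0, purpose='built-in self test')
```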
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
ultimately hurt performance)
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it but up until Python 3.8 both Linux and Mac OS used "fork" (which
duplicates process memory, file descriptor table, etc.).
Python >= 3.8 on Mac OS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues (in other words "spawn" is the default start method used
by multiprocessing.Process). Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.
This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:
- ensure that the package repository and other global state are
transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
(package, stage, etc.)
- move a number of locally-defined functions into global scope so that
they can be pickled
- rework tests where needed to avoid using local functions
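Schematically, "spawn" imposes the constraint that everything crossing the process boundary is picklable and passed explicitly (a toy sketch, not Spack's actual entry point):
```python
import multiprocessing

def build_process(pkg_state):
    # runs in the child; with 'spawn' there is no inherited memory,
    # so all needed state must arrive through picklable arguments
    print('building', pkg_state)

if __name__ == '__main__':
    # the target must live at module scope to be picklable, which is
    # why locally-defined functions had to be moved out
    ctx = multiprocessing.get_context('spawn')
    proc = ctx.Process(target=build_process, args=({'name': 'zlib'},))
    proc.start()
    proc.join()
```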
This PR also reworks sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs).
See: #14102
* allow environments to specify dev-build packages
* spack develop and spack undevelop commands
* never pull dev-build packages from the binary cache
* reinstall dev_specs when code has changed; reinstall dependents too
* preserve dev info paths and versions in concretization as special variant
* move install overwrite transaction into installer
* move dev-build argument handling to package.do_install
now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install
* allow 'any' as wildcard for variants
* spec: allow anonymous dependencies
raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build
* fix variant class hierarchy
Fixes #18441
When writing an environment, there are cases where the lock file for
the environment may be removed. In this case there was a period
between removing the lock file and writing the new manifest file
where an exception could leave the manifest in its old state (in
which case the lock and manifest would be out of sync).
This adds a context manager which is used to restore the prior lock
file state in cases where the manifest file cannot be written.
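The shape of such a context manager might be (a sketch, under the simplifying assumption that the lock file can just be copied aside):
```python
import contextlib
import os
import shutil

@contextlib.contextmanager
def restore_lock_on_error(lock_path):
    backup = lock_path + '.bak'  # hypothetical backup location
    had_lock = os.path.exists(lock_path)
    if had_lock:
        shutil.copy(lock_path, backup)
    try:
        yield
    except Exception:
        # manifest write failed: put the prior lock file back so the
        # two files stay in sync
        if had_lock:
            shutil.copy(backup, lock_path)
        raise
    finally:
        if had_lock and os.path.exists(backup):
            os.remove(backup)
```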
As detailed in https://bugs.python.org/issue33725, starting new
processes with 'fork' on Mac OS is not guaranteed to work in general.
As of Python 3.8 the default process spawning mechanism was changed
to avoid this issue.
Spack depends on the fork-based method to preserve file descriptors
transparently, to preserve global state, and to avoid pickling some
objects. An effort is underway to remove dependence on fork-based
process spawning (see #18205). In the meantime, this allows Spack to
run with Python 3.8 on Mac OS by explicitly choosing to use 'fork'.
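Concretely, this amounts to requesting "fork" explicitly rather than relying on the platform default (sketch):
```python
import multiprocessing

def work():
    pass  # stands in for Spack's build function

if __name__ == '__main__':
    # Python >= 3.8 defaults to 'spawn' on macOS; ask for 'fork' so
    # file descriptors and global state still carry over
    fork_ctx = multiprocessing.get_context('fork')
    proc = fork_ctx.Process(target=work)
    proc.start()
    proc.join()
```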
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Some of the feature flags are named differently and clwb is missing on
my i7-1065G7. cascadelake and cannonlake might have similar problems but
I do not have access to those architectures to test.
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).
With this update, Spack cannot modify existing configs/environments
without updating them (e.g. `spack config add` will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
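For illustration, the new format attaches a list of modules to a single spec (the package and module names below are made up):
```yaml
packages:
  mpich:
    externals:
    - spec: mpich@3.3.2
      modules:
      - mpich/3.3.2
      - hwloc/2.2.0
```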
* switch from bool to int debug levels
* Added debug options and changed lock logging to use more detailed values
* Limit installer and timestamp PIDs to standard debug output
* Reduced verbosity of fetch/stage/install output, changing most to debug level 1
* Combine lock log methods; change build process install to debug
* Changed binary cache install messages to extraction messages
* Separate Apple Clang from LLVM Clang
Apple Clang is a compiler of its own. All places
referring to the "-apple" suffix have been updated.
* Hack to use a dash in 'apple-clang'
To be able to use autodoc from Sphinx we need
a valid Python name for the module that contains
Apple's Clang code.
* Updated packages to account for the existence of apple-clang
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added unit test for XCode related functions
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
After migrating to `travis-ci.com`, we saw I/O issues in our tests --
tests that relied on `capfd` and `capsys` were failing. We've also seen
this in GitHub actions, and it's kept us from switching to them so far.
Turns out that the issue is that using streams like `sys.stdout` as
default arguments doesn't play well with `pytest` and output redirection,
as `pytest` changes the values of `sys.stdout` and `sys.stderr`. If these
values are evaluated before output redirection (as they are when used as
default arg values), output won't be captured properly later.
- [x] replace all stream default arg values with `None`, and only assign stream
values inside functions.
- [x] fix tests we didn't notice were relying on this erroneous behavior
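The anti-pattern and its fix, distilled (not a specific Spack function):
```python
import sys

# anti-pattern: sys.stdout is evaluated once, at function definition
# time, so redirection by pytest is never seen
def msg_bad(text, stream=sys.stdout):
    stream.write(text + '\n')

# fix: default to None and resolve the stream at call time
def msg_good(text, stream=None):
    stream = sys.stdout if stream is None else stream
    stream.write(text + '\n')
```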
Makes the following changes:
* (Fixes #15620) tty configuration was failing when stdout was
redirected. The implementation now creates a pseudo terminal for
stdin and checks stdout properly, so redirections of stdin/out/err
should be handled now.
* Handles terminal configuration when the Spack process moves between
the foreground and background (possibly multiple times) during a
build.
* Spack adjusts terminal settings to let users enable/disable
build process output to the terminal with a "v" toggle; abnormal
exit cases (like CTRL-C) could leave the terminal in an unusable
state. This is addressed here with a special-case handler which
restores terminal settings.
Significantly extend testing of process output logger:
* New PseudoShell object for setting up a master and child process
and configuring file descriptor inheritance between the two
* Tests for "v" verbosity toggle making use of the added PseudoShell
object
* Added `uniq` function which takes a list of elements and replaces
any consecutive sequence of duplicate elements with a single
instance (e.g. "112211" -> "121")
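A sketch of `uniq` consistent with that description:
```python
def uniq(sequence):
    """Collapse consecutive duplicates: [1,1,2,2,1,1] -> [1,2,1]."""
    unique = []
    for element in sequence:
        if not unique or element != unique[-1]:
            unique.append(element)
    return unique
```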
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Spack currently cannot run as a background process uninterrupted because some of the logging functions used in the install method (especially to create the dynamic verbosity toggle with the v key) cause the OS to issue a SIGTTOU to Spack when it's backgrounded.
This PR puts the necessary gatekeeping in place so that Spack doesn't do anything that will cause a signal to stop the process when operating as a background process.
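The gatekeeping boils down to checking whether Spack owns the foreground before touching terminal settings (a sketch; the real check lives in the logging code):
```python
import os
import sys

def _is_background():
    # a background process's group differs from the terminal's
    # foreground process group; changing terminal settings from the
    # background is what triggers SIGTTOU
    try:
        return os.getpgrp() != os.tcgetpgrp(sys.stdout.fileno())
    except OSError:
        return True  # no controlling terminal at all
```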
Sometimes one needs to preserve the (relative) order of
mtimes on installed files. So it's better to just copy
over all the metadata from the source tree to the install
tree. If permissions need fixing, that will be done anyway
afterwards.
One major use case is resource()s:
They're unpacked in one place and then copied to their
final place using install_tree(). If the resource is a
source tree using autoconf/automake, resetting mtimes
incorrectly might force unwanted autoconf/etc. calls.