Parser fix (#2769)

* Fixed parser to eliminate need for escape quotes. TODO: Fix double call to shlex, fix spaces in spec __str__

* Fixed double shlex

* cleanup

* rebased on develop

* Fixed parsing for multiple specs; broken since #360

* Revoked elimination of the `-` sigil in the syntax, and added it back into tests

* flake8

* more flake8

* Cleaned up dead code and added comments to parsing code

* bugfix for spaces in arguments; new bug found in testing

* Added unit tests for kv pairs in parsing/lexing

* Even more flake8

* ... yet another flake8

* Allow multiple specs in install

* unfathomable levels of flake8

* Updated documentation to match parser fix
This commit is contained in:
becker33 2017-01-15 19:17:54 -08:00 committed by Todd Gamblin
parent b2f29b855b
commit a091eeceab
9 changed files with 243 additions and 106 deletions

View File

@@ -647,8 +647,8 @@ avoid ambiguity.
 When spack normalizes specs, it prints them out with no spaces boolean
 variants using the backwards compatibility syntax and uses only ``~``
-for disabled boolean variants. We allow ``-`` and spaces on the command
-line is provided for convenience and legibility.
+for disabled boolean variants. The ``-`` and spaces on the command
+line are provided for convenience and legibility.

 ^^^^^^^^^^^^^^
 Compiler Flags
@@ -658,14 +658,17 @@ Compiler flags are specified using the same syntax as non-boolean variants,
 but fulfill a different purpose. While the function of a variant is set by
 the package, compiler flags are used by the compiler wrappers to inject
 flags into the compile line of the build. Additionally, compiler flags are
-inherited by dependencies. ``spack install libdwarf cppflags=\"-g\"`` will
+inherited by dependencies. ``spack install libdwarf cppflags="-g"`` will
 install both libdwarf and libelf with the ``-g`` flag injected into their
 compile line.
-Notice that the value of the compiler flags must be escape quoted on the
-command line. From within python files, the same spec would be specified
-``libdwarf cppflags="-g"``. This is necessary because of how the shell
-handles the quote symbols.
+Notice that the value of the compiler flags must be quoted if it
+contains any spaces. Any of ``cppflags=-O3``, ``cppflags="-O3"``,
+``cppflags='-O3'``, and ``cppflags="-O3 -fPIC"`` are acceptable, but
+``cppflags=-O3 -fPIC`` is not. Additionally, if the value of the
+compiler flags is not the last thing on the line, it must be followed
+by a space. The command ``spack install libelf cppflags="-O3"%intel``
+will be interpreted as an attempt to set ``cppflags="-O3%intel"``.

 The six compiler flags are injected in the order of implicit make commands
 in GNU Autotools. If all flags are set, the order is
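The quoting rule above follows directly from how the shell splits words. A quick sketch using Python's ``shlex`` module (which mimics POSIX shell splitting, and is what the new parser uses internally) shows why a quoted flag value reaches the parser as a single word:

```python
import shlex

# The shell strips the quotes but keeps the quoted value attached to
# its key as one word, so the lexer can treat everything after '=' in
# a word as the value -- no escaped quotes needed.
words = shlex.split('spack install libdwarf cppflags="-g -O2"')
print(words)
# ['spack', 'install', 'libdwarf', 'cppflags=-g -O2']
```

By contrast, ``shlex.split('cppflags=-g -O2')`` yields two separate words, which is why an unquoted value with spaces cannot work.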

View File

@@ -41,7 +41,7 @@ platform, all on the command line.
 $ spack install mpileaks@1.1.2 %gcc@4.7.3 +debug
 # Add compiler flags using the conventional names
-$ spack install mpileaks@1.1.2 %gcc@4.7.3 cppflags=\"-O3 -floop-block\"
+$ spack install mpileaks@1.1.2 %gcc@4.7.3 cppflags="-O3 -floop-block"
 # Cross-compile for a different architecture with arch=
 $ spack install mpileaks@1.1.2 arch=bgqos_0

View File

@@ -632,7 +632,7 @@ the command line:
 .. code-block:: console
-$ spack install openmpi fflags=\"-mismatch\"
+$ spack install openmpi fflags="-mismatch"
 Or it can be set permanently in your ``compilers.yaml``:

View File

@@ -184,15 +184,15 @@ compilers.
 [+] ~/spack/opt/spack/linux-redhat6-x86_64/intel-15.0.4/libelf-0.8.13-w33hrejdyqu2j2gggdswitls2zv6kdsi
-The spec syntax also includes compiler flags. Spack accepts ``cppflags``,
-``cflags``, ``cxxflags``, ``fflags``, ``ldflags``, and ``ldlibs``
-parameters. The values of these fields must be escape-quoted with ``\"``
-on the command line. These values are injected into the compile line
-automatically by the Spack compiler wrappers.
+The spec syntax also includes compiler flags. Spack accepts
+``cppflags``, ``cflags``, ``cxxflags``, ``fflags``, ``ldflags``, and
+``ldlibs`` parameters. The values of these fields must be quoted on
+the command line if they include spaces. These values are injected
+into the compile line automatically by the Spack compiler wrappers.
 .. code-block:: console
-$ spack install libelf @0.8.12 cppflags=\"-O3\"
+$ spack install libelf @0.8.12 cppflags="-O3"
 ==> Installing libelf
 ==> Trying to fetch from ~/spack/var/spack/cache/libelf/libelf-0.8.12.tar.gz
 ################################################################################################################################################################################# 100.0%
@@ -309,7 +309,7 @@ top-level package, we can also specify about a dependency using ``^``.
 Packages can also be referred to from the command line by their package
 hash. Using the ``spack find -lf`` command earlier we saw that the hash
-of our optimized installation of libelf (``cppflags=\"-O3\"``) began with
+of our optimized installation of libelf (``cppflags="-O3"``) began with
 ``vrv2ttb``. We can now explicitly build with that package without typing
 the entire spec, by using the ``/`` sigil to refer to it by hash. As with
 other tools like git, you do not need to specify an *entire* hash on the
@@ -1103,8 +1103,8 @@ already covered in the :ref:`basics-tutorial-install` and
 The ``spack find`` command can accept what we call "anonymous specs."
 These are expressions in spec syntax that do not contain a package
 name. For example, `spack find %intel` will return every package built
-with the intel compiler, and ``spack find cppflags=\\"-O3\\"`` will
-return every package which was built with ``cppflags=\\"-O3\\"``.
+with the intel compiler, and ``spack find cppflags="-O3"`` will
+return every package which was built with ``cppflags="-O3"``.

 .. code-block:: console

@@ -1115,7 +1115,7 @@ return every package which was built with ``cppflags="-O3"``.
-$ spack find cppflags=\"-O3\"
+$ spack find cppflags="-O3"
 ==> 1 installed packages.
 -- linux-redhat6-x86_64 / gcc@4.4.7 -----------------------------
 libelf@0.8.12

View File

@@ -108,9 +108,6 @@ def parse_specs(args, **kwargs):
     concretize = kwargs.get('concretize', False)
     normalize = kwargs.get('normalize', False)

-    if isinstance(args, (python_list, tuple)):
-        args = " ".join(args)
-
     try:
         specs = spack.spec.parse(args)
         for spec in specs:

View File

@@ -315,10 +315,10 @@ def install(parser, args, **kwargs):
     # Spec from cli
     specs = spack.cmd.parse_specs(args.package, concretize=True)
-    if len(specs) != 1:
-        tty.error('only one spec can be installed at a time.')
-    spec = specs.pop()
+    if len(specs) == 0:
+        tty.error('The `spack install` command requires a spec to install.')

+    for spec in specs:
         # Check if we were asked to produce some log for dashboards
         if args.log_format is not None:
             # Compute the filename for logging
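The behavioral change above boils down to replacing a single-spec pop with a loop over all parsed specs. A minimal sketch, where ``install_one`` is a hypothetical stand-in for the real per-spec install logic:

```python
def install_all(specs, install_one):
    # Mirror of the new behavior: reject an empty spec list, then
    # install each parsed spec in turn instead of requiring exactly one.
    if not specs:
        raise ValueError('spack install requires at least one spec')
    for spec in specs:
        install_one(spec)

installed = []
install_all(['libelf', 'libdwarf'], installed.append)
print(installed)
# ['libelf', 'libdwarf']
```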

View File

@@ -23,6 +23,7 @@
 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 ##############################################################################
 import re
+import shlex
 import itertools

 import spack.error
@@ -53,18 +54,55 @@ def __cmp__(self, other):
 class Lexer(object):
     """Base class for Lexers that keep track of line numbers."""

-    def __init__(self, lexicon):
-        self.scanner = re.Scanner(lexicon)
+    def __init__(self, lexicon0, mode_switches_01=[],
+                 lexicon1=[], mode_switches_10=[]):
+        self.scanner0 = re.Scanner(lexicon0)
+        self.mode_switches_01 = mode_switches_01
+        self.scanner1 = re.Scanner(lexicon1)
+        self.mode_switches_10 = mode_switches_10
+        self.mode = 0

     def token(self, type, value=''):
-        return Token(type, value,
-                     self.scanner.match.start(0), self.scanner.match.end(0))
+        if self.mode == 0:
+            return Token(type, value,
+                         self.scanner0.match.start(0),
+                         self.scanner0.match.end(0))
+        else:
+            return Token(type, value,
+                         self.scanner1.match.start(0),
+                         self.scanner1.match.end(0))
+
+    def lex_word(self, word):
+        scanner = self.scanner0
+        mode_switches = self.mode_switches_01
+        if self.mode == 1:
+            scanner = self.scanner1
+            mode_switches = self.mode_switches_10
+
+        tokens, remainder = scanner.scan(word)
+        remainder_used = 0
+
+        for i, t in enumerate(tokens):
+            if t.type in mode_switches:
+                # Combine post-switch tokens with remainder and
+                # scan in other mode
+                self.mode = 1 - self.mode  # swap 0/1
+                remainder_used = 1
+                tokens = tokens[:i + 1] + self.lex_word(
+                    word[word.index(t.value) + len(t.value):])
+                break
+
+        if remainder and not remainder_used:
+            raise LexError("Invalid character", word, word.index(remainder))
+
+        return tokens

     def lex(self, text):
-        tokens, remainder = self.scanner.scan(text)
-        if remainder:
-            raise LexError("Invalid character", text, text.index(remainder))
-        return tokens
+        lexed = []
+        for word in text:
+            tokens = self.lex_word(word)
+            lexed.extend(tokens)
+        return lexed

 class Parser(object):
@@ -121,6 +159,8 @@ def expect(self, id):
             sys.exit(1)

     def setup(self, text):
+        if isinstance(text, basestring):
+            text = shlex.split(text)
         self.text = text
         self.push_tokens(self.lexer.lex(text))
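The two-mode scanning idea in the `Lexer` changes above can be sketched independently. This is an illustrative reimplementation with made-up names, not Spack's actual classes: mode 0 tokenizes identifiers and `=`, and an `EQ` token switches to a value mode where the rest of the word is consumed verbatim as a single `VAL` token, so values like `-O3 -fPIC` survive intact:

```python
import re

# Mode 0: identifiers and '='.  Mode 1: swallow the rest of the word.
SCANNER0 = re.Scanner([
    (r'=', lambda s, v: ('EQ', v)),
    (r'\w[\w.-]*', lambda s, v: ('ID', v)),
])
SCANNER1 = re.Scanner([
    (r'\S.*', lambda s, v: ('VAL', v)),
])

def lex_word(word):
    tokens, remainder = SCANNER0.scan(word)
    for i, tok in enumerate(tokens):
        if tok[0] == 'EQ':
            # Everything after '=' is rescanned in value mode.
            rest = word[word.index(tok[1]) + len(tok[1]):]
            tail, _ = SCANNER1.scan(rest)
            return tokens[:i + 1] + tail
    return tokens

print(lex_word('cppflags=-O3 -fPIC'))
# [('ID', 'cppflags'), ('EQ', '='), ('VAL', '-O3 -fPIC')]
```

Because the mode switch happens per word, a value containing spaces must already have been grouped into one word by the shell (or `shlex.split`), which is exactly what `Parser.setup` now guarantees.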

View File

@@ -614,7 +614,7 @@ def __str__(self):
         if type(self.value) == bool:
             return '{0}{1}'.format('+' if self.value else '~', self.name)
         else:
-            return ' {0}={1}'.format(self.name, self.value)
+            return ' {0}={1} '.format(self.name, self.value)

 class VariantMap(HashableMap):
@@ -731,7 +731,8 @@ def __str__(self):
         cond_symbol = ' ' if len(sorted_keys) > 0 else ''
         return cond_symbol + ' '.join(
             str(key) + '=\"' + ' '.join(
-                str(f) for f in self[key]) + '\"' for key in sorted_keys)
+                str(f) for f in self[key]) + '\"'
+            for key in sorted_keys) + cond_symbol

 class DependencyMap(HashableMap):
@@ -2447,7 +2448,8 @@ def write(s, c):
                 write(fmt % str(self.variants), c)
             elif c == '=':
                 if self.architecture and str(self.architecture):
-                    write(fmt % (' arch' + c + str(self.architecture)), c)
+                    a_str = ' arch' + c + str(self.architecture) + ' '
+                    write(fmt % (a_str), c)
             elif c == '#':
                 out.write('-' + fmt % (self.dag_hash(7)))
             elif c == '$':
@@ -2506,7 +2508,7 @@ def write(s, c):
                     write(fmt % str(self.variants), '+')
                 elif named_str == 'ARCHITECTURE':
                     if self.architecture and str(self.architecture):
-                        write(fmt % str(self.architecture), ' arch=')
+                        write(fmt % str(self.architecture) + ' ', ' arch=')
                 elif named_str == 'SHA1':
                     if self.dependencies:
                         out.write(fmt % str(self.dag_hash(7)))
@@ -2576,7 +2578,8 @@ def __cmp__(self, other):
         return 0

     def __str__(self):
-        return self.format() + self.dep_string()
+        ret = self.format() + self.dep_string()
+        return ret.strip()

     def _install_status(self):
         """Helper for tree to print DB install status."""
@@ -2650,7 +2653,7 @@ def __repr__(self):
 #
 # These are possible token types in the spec grammar.
 #
-HASH, DEP, AT, COLON, COMMA, ON, OFF, PCT, EQ, QT, ID = range(11)
+HASH, DEP, AT, COLON, COMMA, ON, OFF, PCT, EQ, ID, VAL = range(11)

 class SpecLexer(spack.parse.Lexer):
@@ -2671,10 +2674,12 @@ def __init__(self):
             (r'\=', lambda scanner, val: self.token(EQ, val)),
             # This is more liberal than identifier_re (see above).
             # Checked by check_identifier() for better error messages.
-            (r'([\"\'])(?:(?=(\\?))\2.)*?\1',
-             lambda scanner, val: self.token(QT, val)),
             (r'\w[\w.-]*', lambda scanner, val: self.token(ID, val)),
-            (r'\s+', lambda scanner, val: None)])
+            (r'\s+', lambda scanner, val: None)],
+            [EQ],
+            [(r'[\S].*', lambda scanner, val: self.token(VAL, val)),
+             (r'\s+', lambda scanner, val: None)],
+            [VAL])
 # Lexer is always the same for every parser.
@@ -2689,36 +2694,49 @@ def __init__(self):
     def do_parse(self):
         specs = []

         try:
-            while self.next:
+            while self.next or self.previous:
                 # TODO: clean this parsing up a bit
                 if self.previous:
-                    specs.append(self.spec(self.previous.value))
-                if self.accept(ID):
-                    self.previous = self.token
-                    if self.accept(EQ):
-                        if not specs:
-                            specs.append(self.spec(None))
-                        if self.accept(QT):
-                            self.token.value = self.token.value[1:-1]
-                        else:
-                            self.expect(ID)
-                        specs[-1]._add_flag(
-                            self.previous.value, self.token.value)
-                    else:
-                        specs.append(self.spec(self.previous.value))
+                    # We picked up the name of this spec while finishing the
+                    # previous spec
+                    specs.append(self.spec(self.previous.value))
                     self.previous = None
+                elif self.accept(ID):
+                    self.previous = self.token
+                    if self.accept(EQ):
+                        # We're either parsing an anonymous spec beginning
+                        # with a key-value pair or adding a key-value pair
+                        # to the last spec
+                        if not specs:
+                            specs.append(self.spec(None))
+                        self.expect(VAL)
+                        specs[-1]._add_flag(
+                            self.previous.value, self.token.value)
+                        self.previous = None
+                    else:
+                        # We're parsing a new spec by name
+                        value = self.previous.value
+                        self.previous = None
+                        specs.append(self.spec(value))
                 elif self.accept(HASH):
+                    # We're finding a spec by hash
                     specs.append(self.spec_by_hash())

                 elif self.accept(DEP):
                     if not specs:
+                        # We're parsing an anonymous spec beginning with a
+                        # dependency
                         self.previous = self.token
                         specs.append(self.spec(None))
                         self.previous = None
                     if self.accept(HASH):
+                        # We're finding a dependency by hash for an anonymous
+                        # spec
                        dep = self.spec_by_hash()
                     else:
+                        # We're adding a dependency to the last spec
                         self.expect(ID)
                         dep = self.spec(self.token.value)

@@ -2727,11 +2745,12 @@ def do_parse(self):
                     specs[-1]._add_dependency(dep, ())

                 else:
-                    # Attempt to construct an anonymous spec, but check that
-                    # the first token is valid
-                    # TODO: Is this check even necessary, or will it all be Lex
-                    # errors now?
-                    specs.append(self.spec(None, True))
+                    # If the next token can be part of a valid anonymous spec,
+                    # create the anonymous spec
+                    if self.next.type in (AT, ON, OFF, PCT):
+                        specs.append(self.spec(None))
+                    else:
+                        self.unexpected_token()

         except spack.parse.ParseError as e:
             raise SpecParseError(e)
@@ -2768,7 +2787,7 @@ def spec_by_hash(self):
         return matches[0]

-    def spec(self, name, check_valid_token=False):
+    def spec(self, name):
         """Parse a spec out of the input. If a spec is supplied, then initialize
         and return it instead of creating a new one."""
         if name:
@@ -2819,35 +2838,28 @@ def spec(self, name):
                 for version in vlist:
                     spec._add_version(version)
                 added_version = True
-                check_valid_token = False

             elif self.accept(ON):
                 spec._add_variant(self.variant(), True)
-                check_valid_token = False

             elif self.accept(OFF):
                 spec._add_variant(self.variant(), False)
-                check_valid_token = False

             elif self.accept(PCT):
                 spec._set_compiler(self.compiler())
-                check_valid_token = False

             elif self.accept(ID):
                 self.previous = self.token
                 if self.accept(EQ):
-                    if self.accept(QT):
-                        self.token.value = self.token.value[1:-1]
-                    else:
-                        self.expect(ID)
+                    # We're adding a key-value pair to the spec
+                    self.expect(VAL)
                     spec._add_flag(self.previous.value, self.token.value)
                     self.previous = None
                 else:
-                    return spec
+                    # We've found the start of a new spec. Go back to do_parse
+                    break

             else:
-                if check_valid_token:
-                    self.unexpected_token()
                 break

         # If there was no version in the spec, consider it an open range
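The `accept`/`expect` calls that drive `spec()` and `do_parse()` follow the standard recursive-descent idiom: `accept` consumes a token only if it matches, `expect` consumes or fails. A self-contained sketch of that idiom, using hypothetical `(type, value)` token tuples rather than Spack's real `Token` class:

```python
class MiniParser:
    """Sketch of the accept/expect idiom used by the spec parser."""

    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.pos = 0
        self.token = None  # last consumed token

    @property
    def next(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def accept(self, ttype):
        # Consume the next token only if its type matches.
        if self.next and self.next[0] == ttype:
            self.token = self.tokens[self.pos]
            self.pos += 1
            return True
        return False

    def expect(self, ttype):
        # Consume a matching token or fail loudly.
        if not self.accept(ttype):
            raise SyntaxError('expected %s' % ttype)

# Parse one key-value pair: ID EQ VAL
p = MiniParser([('ID', 'cppflags'), ('EQ', '='), ('VAL', '-O3')])
p.expect('ID'); key = p.token[1]
p.expect('EQ')
p.expect('VAL'); val = p.token[1]
print(key, val)
# cppflags -O3
```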

View File

@@ -23,6 +23,7 @@
 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 ##############################################################################
 import pytest
+import shlex

 import spack.spec as sp
 from spack.parse import Token
@@ -53,6 +54,34 @@
           Token(sp.AT),
           Token(sp.ID, '8.1_1e')]

+# Another sample lexer output with a kv pair.
+kv_lex = [Token(sp.ID, 'mvapich_foo'),
+          Token(sp.ID, 'debug'),
+          Token(sp.EQ),
+          Token(sp.VAL, '4'),
+          Token(sp.DEP),
+          Token(sp.ID, '_openmpi'),
+          Token(sp.AT),
+          Token(sp.ID, '1.2'),
+          Token(sp.COLON),
+          Token(sp.ID, '1.4'),
+          Token(sp.COMMA),
+          Token(sp.ID, '1.6'),
+          Token(sp.PCT),
+          Token(sp.ID, 'intel'),
+          Token(sp.AT),
+          Token(sp.ID, '12.1'),
+          Token(sp.COLON),
+          Token(sp.ID, '12.6'),
+          Token(sp.ON),
+          Token(sp.ID, 'debug'),
+          Token(sp.OFF),
+          Token(sp.ID, 'qt_4'),
+          Token(sp.DEP),
+          Token(sp.ID, 'stackwalker'),
+          Token(sp.AT),
+          Token(sp.ID, '8.1_1e')]
 class TestSpecSyntax(object):
     # ========================================================================
@@ -81,9 +110,10 @@ def check_parse(self, expected, spec=None, remove_arch=True):
     def check_lex(self, tokens, spec):
         """Check that the provided spec parses to the provided token list."""
+        spec = shlex.split(spec)
         lex_output = sp.SpecLexer().lex(spec)
         for tok, spec_tok in zip(tokens, lex_output):
-            if tok.type == sp.ID:
+            if tok.type == sp.ID or tok.type == sp.VAL:
                 assert tok == spec_tok
             else:
                 # Only check the type for non-identifiers.
@@ -112,10 +142,19 @@ def test_dependencies_with_versions(self):
         self.check_parse("openmpi^hwloc@:1.4b7-rc3")
         self.check_parse("openmpi^hwloc@1.2e6:1.4b7-rc3")

-    @pytest.mark.xfail
     def test_multiple_specs(self):
         self.check_parse("mvapich emacs")

+    def test_multiple_specs_after_kv(self):
+        self.check_parse('mvapich cppflags="-O3 -fPIC" emacs')
+        self.check_parse('mvapich cflags="-O3" emacs',
+                         'mvapich cflags=-O3 emacs')
+
+    def test_multiple_specs_long_second(self):
+        self.check_parse('mvapich emacs@1.1.1%intel cflags="-O3"',
+                         'mvapich emacs @1.1.1 %intel cflags=-O3')
+        self.check_parse('mvapich cflags="-O3 -fPIC" emacs^ncurses%intel')
     def test_full_specs(self):
         self.check_parse(
             "mvapich_foo"
@@ -123,15 +162,15 @@ def test_full_specs(self):
             "^stackwalker@8.1_1e")
         self.check_parse(
             "mvapich_foo"
-            "^_openmpi@1.2:1.4,1.6%intel@12.1 debug=2~qt_4"
+            "^_openmpi@1.2:1.4,1.6%intel@12.1 debug=2 ~qt_4"
             "^stackwalker@8.1_1e")
         self.check_parse(
             'mvapich_foo'
-            '^_openmpi@1.2:1.4,1.6%intel@12.1 cppflags="-O3"+debug~qt_4'
+            '^_openmpi@1.2:1.4,1.6%intel@12.1 cppflags="-O3" +debug~qt_4'
             '^stackwalker@8.1_1e')
         self.check_parse(
             "mvapich_foo"
-            "^_openmpi@1.2:1.4,1.6%intel@12.1 debug=2~qt_4"
+            "^_openmpi@1.2:1.4,1.6%intel@12.1 debug=2 ~qt_4"
             "^stackwalker@8.1_1e arch=test-redhat6-x86_32")
     def test_canonicalize(self):
@@ -158,19 +197,19 @@ def test_canonicalize(self):
             "x ^y~f+e~d+c~b+a@4,2:3,1%intel@4,3,2,1")

         self.check_parse(
-            "x arch=test-redhat6-None"
-            "^y arch=test-None-x86_64"
+            "x arch=test-redhat6-None "
+            "^y arch=test-None-x86_64 "
             "^z arch=linux-None-None",
-            "x os=fe"
-            "^y target=be"
+            "x os=fe "
+            "^y target=be "
             "^z platform=linux")

         self.check_parse(
-            "x arch=test-debian6-x86_64"
+            "x arch=test-debian6-x86_64 "
             "^y arch=test-debian6-x86_64",
-            "x os=default_os target=default_target"
+            "x os=default_os target=default_target "
             "^y os=default_os target=default_target")

         self.check_parse("x^y", "x@: ^y@:")
@@ -184,7 +223,7 @@ def test_duplicate_variant(self):
             'x@1.2+debug+debug',
             'x ^y@1.2+debug debug=true',
             'x ^y@1.2 debug=false debug=true',
-            'x ^y@1.2 debug=false~debug'
+            'x ^y@1.2 debug=false ~debug'
         ]
         self._check_raises(DuplicateVariantError, duplicates)
@@ -277,3 +316,49 @@ def test_way_too_many_spaces(self):
             "mvapich_foo "
             "^ _openmpi @1.2 : 1.4 , 1.6 % intel @ 12.1 : 12.6 + debug - qt_4 "
             "^ stackwalker @ 8.1_1e")
+        self.check_lex(
+            complex_lex,
+            "mvapich_foo "
+            "^ _openmpi @1.2 : 1.4 , 1.6 % intel @ 12.1 : 12.6 + debug ~ qt_4 "
+            "^ stackwalker @ 8.1_1e")
+
+    def test_kv_with_quotes(self):
+        self.check_lex(
+            kv_lex,
+            "mvapich_foo debug='4' "
+            "^ _openmpi @1.2 : 1.4 , 1.6 % intel @ 12.1 : 12.6 + debug - qt_4 "
+            "^ stackwalker @ 8.1_1e")
+        self.check_lex(
+            kv_lex,
+            'mvapich_foo debug="4" '
+            "^ _openmpi @1.2 : 1.4 , 1.6 % intel @ 12.1 : 12.6 + debug - qt_4 "
+            "^ stackwalker @ 8.1_1e")
+        self.check_lex(
+            kv_lex,
+            "mvapich_foo 'debug = 4' "
+            "^ _openmpi @1.2 : 1.4 , 1.6 % intel @ 12.1 : 12.6 + debug - qt_4 "
+            "^ stackwalker @ 8.1_1e")
+
+    def test_kv_without_quotes(self):
+        self.check_lex(
+            kv_lex,
+            "mvapich_foo debug=4 "
+            "^ _openmpi @1.2 : 1.4 , 1.6 % intel @ 12.1 : 12.6 + debug - qt_4 "
+            "^ stackwalker @ 8.1_1e")
+
+    def test_kv_with_spaces(self):
+        self.check_lex(
+            kv_lex,
+            "mvapich_foo debug = 4 "
+            "^ _openmpi @1.2 : 1.4 , 1.6 % intel @ 12.1 : 12.6 + debug - qt_4 "
+            "^ stackwalker @ 8.1_1e")
+        self.check_lex(
+            kv_lex,
+            "mvapich_foo debug =4 "
+            "^ _openmpi @1.2 : 1.4 , 1.6 % intel @ 12.1 : 12.6 + debug - qt_4 "
+            "^ stackwalker @ 8.1_1e")
+        self.check_lex(
+            kv_lex,
+            "mvapich_foo debug= 4 "
+            "^ _openmpi @1.2 : 1.4 , 1.6 % intel @ 12.1 : 12.6 + debug - qt_4 "
+            "^ stackwalker @ 8.1_1e")