Improve log parsing performance (#7093)

* Allow dashes in command names and fix command name handling

- Commands should allow dashes in their names, like the rest of spack,
  e.g. `spack log-parse`
  - It might be too late for `spack build-cache` (since it is already
    called `spack buildcache`), but we should try to avoid such
    inconsistencies in naming conventions going forward

- The code was inconsistent about where commands should be called by
  their python module name (e.g. `log_parse`) and where the actual
  command name should be used (e.g. `log-parse`).

- This made it hard to make a command with a dash in the name, and it
  made `SpackCommand` fail to recognize commands with dashes.

- The code now uses the user-facing name with dashes for function
  parameters, then converts it to the module name when needed.
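
The conversion above can be sketched as a pair of helpers (mirroring the `python_name()` and `cmd_name()` functions added in this change):

```python
def python_name(cmd_name):
    """Convert a user-facing command name (with '-') to a valid
    Python identifier (with '_')."""
    return cmd_name.replace("-", "_")


def cmd_name(pname):
    """Convert a module/function name (with '_') back to the
    user-facing command name (with '-')."""
    return pname.replace("_", "-")


print(python_name("log-parse"))  # log_parse
print(cmd_name("log_parse"))     # log-parse
```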

* Improve performance of log parsing

- A number of regular expressions from ctest_log_parser have really poor
  performance, mostly due to unanchored expressions with * or + (i.e.,
  they don't start with ^, so the repetition has to be checked at every
  position in the string by Python's backtracking regex implementation)

- I can't verify that CTest's regexes would still work with an added ^,
  so I don't really want to touch them.  I tried anchoring them and
  found that it caused some tests to break.

- Instead of requiring only "efficient" regular expressions, this adds a
  prefilter() class that allows the parser to quickly check a cheap
  precondition before evaluating any of the expensive regexes.

- Preconditions do things like check whether the string contains "error"
  or "warning" (linear-time checks) before evaluating regexes that would
  require them.  It's sad that Python doesn't use Thompson-style regex
  matching (see https://swtch.com/~rsc/regexp/regexp1.html)

- Even with Python's slow implementation, this makes the parser ~200x
  faster on the input we tried it on.
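
A minimal sketch of the prefiltering idea (a simplified version of the `prefilter` class added in the diff; the sample pattern and strings here are illustrative):

```python
import re


class Prefilter(object):
    """Run a cheap, linear-time precondition before any expensive regex."""

    def __init__(self, precondition, *patterns):
        self.pre = precondition
        self.patterns = [re.compile(p) for p in patterns]

    def search(self, text):
        # the substring test is O(n); skip the regexes entirely
        # unless it passes
        return self.pre(text) and any(p.search(text) for p in self.patterns)


pf = Prefilter(lambda line: 'error' in line,
               r"([^:]+): error[ \t]*[0-9]+[ \t]*:")
print(pf.search("foo.c: error 42: bad thing"))  # True
print(pf.search("everything is fine here"))     # False
```

The precondition short-circuits, so lines that cannot possibly match never touch the backtracking regex engine.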

* Add `spack log-parse` command and improve the display of parsed logs

- Add better coloring and line wrapping to the log parse output.  This
  makes messy build output much easier to read, with line numbers shown
  alongside each event.

- `spack log-parse` allows the log parsing logic used at the end of
  builds to be executed on arbitrary files, which is handy even outside
  of spack.

- Also provides a profile option -- we can profile arbitrary files and
  show which regular expressions in the magic CTest parser take the most
  time.

* Parallelize log parsing

- Log parsing now uses multiple threads for long logs
- Lines from logs are divided into chunks and farmed out to <ncpus>
  worker processes
- Add -j option to `spack log-parse`
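
The chunking step can be sketched as follows (a Python 3 version of the `chunks()` helper in the diff; the sample input is illustrative):

```python
import math


def chunks(l, n):
    """Divide list l into n approximately even chunks."""
    chunksize = int(math.ceil(len(l) / n))
    return [l[i:i + chunksize] for i in range(0, len(l), chunksize)]


lines = ['line %d' % i for i in range(10)]
print([len(c) for c in chunks(lines, 3)])  # [4, 4, 2]
```

Each chunk is then handed to a worker along with its starting offset, so reported line numbers stay correct after the results are merged.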
This commit is contained in:
Todd Gamblin 2018-01-31 21:57:56 -08:00 committed by becker33
parent 514f0bf5c5
commit df758e1cfc
6 changed files with 403 additions and 91 deletions


@@ -65,18 +65,66 @@
up to date with CTest, just make sure the ``*_matches`` and
``*_exceptions`` lists are kept up to date with CTest's build handler.
"""
from __future__ import print_function
from __future__ import division
import re
import math
import multiprocessing
import time
from contextlib import contextmanager
from six import StringIO
from six import string_types
class prefilter(object):
"""Make regular expressions faster with a simple prefiltering predicate.
Some regular expressions seem to be much more costly than others. In
most cases, we can evaluate a simple precondition, e.g.::
lambda x: "error" in x
to avoid evaluating expensive regexes on all lines in a file. This
can reduce parse time for large files by orders of magnitude when
evaluating lots of expressions.
A ``prefilter`` object is designed to act like a regex, but
``search`` and ``match`` check the precondition before bothering to
evaluate the regular expression.
Note that ``match`` and ``search`` just return ``True`` and ``False``
at the moment. Make them return a ``MatchObject`` or ``None`` if it
becomes necessary.
"""
def __init__(self, precondition, *patterns):
self.patterns = [re.compile(p) for p in patterns]
self.pre = precondition
self.pattern = "\n ".join(
('MERGED:',) + patterns)
def search(self, text):
return self.pre(text) and any(p.search(text) for p in self.patterns)
def match(self, text):
return self.pre(text) and any(p.match(text) for p in self.patterns)
_error_matches = [
prefilter(
lambda x: any(s in x for s in (
'Error:', 'error', 'undefined reference', 'multiply defined')),
"([^:]+): error[ \\t]*[0-9]+[ \\t]*:",
"([^:]+): (Error:|error|undefined reference|multiply defined)",
"([^ :]+) : (error|fatal error|catastrophic error)",
"([^:]+)\\(([^\\)]+)\\) ?: (error|fatal error|catastrophic error)"),
prefilter(
lambda s: s.count(':') >= 2,
"[^ :]+:[0-9]+: [^ \\t]"),
"^[Bb]us [Ee]rror",
"^[Ss]egmentation [Vv]iolation",
"^[Ss]egmentation [Ff]ault",
":.*[Pp]ermission [Dd]enied",
"([^ :]+):([0-9]+): ([^ \\t])",
"([^:]+): error[ \\t]*[0-9]+[ \\t]*:",
"^Error ([0-9]+):",
"^Fatal",
"^Error: ",
@@ -86,9 +134,6 @@
"^cc[^C]*CC: ERROR File = ([^,]+), Line = ([0-9]+)",
"^ld([^:])*:([ \\t])*ERROR([^:])*:",
"^ild:([ \\t])*\\(undefined symbol\\)",
"([^ :]+) : (error|fatal error|catastrophic error)",
"([^:]+): (Error:|error|undefined reference|multiply defined)",
"([^:]+)\\(([^\\)]+)\\) ?: (error|fatal error|catastrophic error)",
"^fatal error C[0-9]+:",
": syntax error ",
"^collect2: ld returned 1 exit status",
@@ -128,7 +173,7 @@
"^Command .* failed with exit code",
]
error_exceptions = [
_error_exceptions = [
"instantiated from ",
"candidates are:",
": warning",
@@ -143,32 +188,38 @@
]
#: Regexes to match file/line numbers in error/warning messages
warning_matches = [
"([^ :]+):([0-9]+): warning:",
"([^ :]+):([0-9]+): note:",
_warning_matches = [
prefilter(
lambda x: 'warning' in x,
"([^ :]+):([0-9]+): warning:",
"([^:]+): warning ([0-9]+):",
"([^:]+): warning[ \\t]*[0-9]+[ \\t]*:",
"([^ :]+) : warning",
"([^:]+): warning"),
prefilter(
lambda x: 'note:' in x,
"^([^ :]+):([0-9]+): note:"),
prefilter(
lambda x: any(s in x for s in ('Warning', 'Warnung')),
"^(Warning|Warnung) ([0-9]+):",
"^(Warning|Warnung)[ :]",
"^cxx: Warning:",
"([^ :]+):([0-9]+): (Warning|Warnung)",
"^CMake Warning.*:"),
"file: .* has no symbols",
"^cc[^C]*CC: WARNING File = ([^,]+), Line = ([0-9]+)",
"^ld([^:])*:([ \\t])*WARNING([^:])*:",
"([^:]+): warning ([0-9]+):",
"^\"[^\"]+\", line [0-9]+: [Ww](arning|arnung)",
"([^:]+): warning[ \\t]*[0-9]+[ \\t]*:",
"^(Warning|Warnung) ([0-9]+):",
"^(Warning|Warnung)[ :]",
"WARNING: ",
"([^ :]+) : warning",
"([^:]+): warning",
"\", line [0-9]+\\.[0-9]+: [0-9]+-[0-9]+ \\([WI]\\)",
"^cxx: Warning:",
".*file: .* has no symbols",
"([^ :]+):([0-9]+): (Warning|Warnung)",
"\\([0-9]*\\): remark #[0-9]*",
"\".*\", line [0-9]+: remark\\([0-9]*\\):",
"cc-[0-9]* CC: REMARK File = .*, Line = [0-9]*",
"^CMake Warning.*:",
"^\\[WARNING\\]",
]
#: Regexes to match file/line numbers in error/warning messages
warning_exceptions = [
_warning_exceptions = [
"/usr/.*/X11/Xlib\\.h:[0-9]+: war.*: ANSI C\\+\\+ forbids declaration",
"/usr/.*/X11/Xutil\\.h:[0-9]+: war.*: ANSI C\\+\\+ forbids declaration",
"/usr/.*/X11/XResource\\.h:[0-9]+: war.*: ANSI C\\+\\+ forbids declaration",
@@ -188,7 +239,7 @@
]
#: Regexes to match file/line numbers in error/warning messages
file_line_matches = [
_file_line_matches = [
"^Warning W[0-9]+ ([a-zA-Z.\\:/0-9_+ ~-]+) ([0-9]+):",
"^([a-zA-Z./0-9_+ ~-]+):([0-9]+):",
"^([a-zA-Z.\\:/0-9_+ ~-]+)\\(([0-9]+)\\)",
@@ -250,25 +301,120 @@ class BuildWarning(LogEvent):
"""LogEvent subclass for build warnings."""
def chunks(l, n):
"""Divide l into n approximately-even chunks."""
chunksize = int(math.ceil(len(l) / n))
return [l[i:i + chunksize] for i in range(0, len(l), chunksize)]
@contextmanager
def _time(times, i):
start = time.time()
yield
end = time.time()
times[i] += end - start
def _match(matches, exceptions, line):
"""True if line matches a regex in matches and none in exceptions."""
return (any(m.search(line) for m in matches) and
not any(e.search(line) for e in exceptions))
def _profile_match(matches, exceptions, line, match_times, exc_times):
"""Profiled version of match().
Timing is expensive so we have two whole functions. This is much
longer because we have to break up the ``any()`` calls.
"""
for i, m in enumerate(matches):
with _time(match_times, i):
if m.search(line):
break
else:
return False
for i, m in enumerate(exceptions):
with _time(exc_times, i):
if m.search(line):
return False
else:
return True
def _parse(lines, offset, profile):
def compile(regex_array):
return [regex if isinstance(regex, prefilter) else re.compile(regex)
for regex in regex_array]
error_matches = compile(_error_matches)
error_exceptions = compile(_error_exceptions)
warning_matches = compile(_warning_matches)
warning_exceptions = compile(_warning_exceptions)
file_line_matches = compile(_file_line_matches)
matcher, args = _match, []
timings = []
if profile:
matcher = _profile_match
timings = [
[0.0] * len(error_matches), [0.0] * len(error_exceptions),
[0.0] * len(warning_matches), [0.0] * len(warning_exceptions)]
errors = []
warnings = []
for i, line in enumerate(lines):
# use CTest's regular expressions to scrape the log for events
if matcher(error_matches, error_exceptions, line, *timings[:2]):
event = BuildError(line.strip(), offset + i + 1)
errors.append(event)
elif matcher(warning_matches, warning_exceptions, line, *timings[2:]):
event = BuildWarning(line.strip(), offset + i + 1)
warnings.append(event)
else:
continue
# get file/line number for each event, if possible
for flm in file_line_matches:
match = flm.search(line)
if match:
event.source_file, source_line_no = match.groups()
return errors, warnings, timings
def _parse_unpack(args):
return _parse(*args)
class CTestLogParser(object):
"""Log file parser that extracts errors and warnings."""
def __init__(self):
def compile(regex_array):
return [re.compile(regex) for regex in regex_array]
def __init__(self, profile=False):
# whether to record timing information
self.timings = []
self.profile = profile
self.error_matches = compile(error_matches)
self.error_exceptions = compile(error_exceptions)
self.warning_matches = compile(warning_matches)
self.warning_exceptions = compile(warning_exceptions)
self.file_line_matches = compile(file_line_matches)
def print_timings(self):
"""Print out profile of time spent in different regular expressions."""
def stringify(elt):
return elt if isinstance(elt, str) else elt.pattern
def parse(self, stream, context=6):
index = 0
for name, arr in [('error_matches', _error_matches),
('error_exceptions', _error_exceptions),
('warning_matches', _warning_matches),
('warning_exceptions', _warning_exceptions)]:
print()
print(name)
for i, elt in enumerate(arr):
print("%16.2f %s" % (
self.timings[index][i] * 1e6, stringify(elt)))
index += 1
def parse(self, stream, context=6, jobs=None):
"""Parse a log file by searching each line for errors and warnings.
Args:
@@ -281,30 +427,45 @@ def parse(self, stream, context=6):
"""
if isinstance(stream, string_types):
with open(stream) as f:
return self.parse(f)
return self.parse(f, context, jobs)
lines = [line for line in stream]
errors = []
warnings = []
for i, line in enumerate(lines):
# use CTest's regular expressions to scrape the log for events
if _match(self.error_matches, self.error_exceptions, line):
event = BuildError(line.strip(), i + 1)
errors.append(event)
elif _match(self.warning_matches, self.warning_exceptions, line):
event = BuildWarning(line.strip(), i + 1)
warnings.append(event)
else:
continue
if jobs is None:
jobs = multiprocessing.cpu_count()
# get file/line number for each event, if possible
for flm in self.file_line_matches:
match = flm.search(line)
if match:
event.source_file, source_line_no = match.groups()
# single-thread small logs
if len(lines) < 10 * jobs:
errors, warnings, self.timings = _parse(lines, 0, self.profile)
# add log context, as well
else:
# Build arguments for parallel jobs
args = []
offset = 0
for chunk in chunks(lines, jobs):
args.append((chunk, offset, self.profile))
offset += len(chunk)
# create a pool and farm out the matching job
pool = multiprocessing.Pool(jobs)
try:
# this is a workaround for a Python bug in Pool with ctrl-C
results = pool.map_async(_parse_unpack, args, 1).get(9999999)
errors, warnings, timings = zip(*results)
finally:
pool.terminate()
# merge results
errors = sum(errors, [])
warnings = sum(warnings, [])
if self.profile:
self.timings = [
[sum(i) for i in zip(*t)] for t in zip(*timings)]
# add log context to all events
for event in (errors + warnings):
i = event.line_no - 1
event.pre_context = [
l.rstrip() for l in lines[i - context:i]]
event.post_context = [


@@ -59,11 +59,24 @@
command_path = os.path.join(spack.lib_path, "spack", "cmd")
#: Names of all commands
commands = []
def python_name(cmd_name):
"""Convert ``-`` to ``_`` in command name, to make a valid identifier."""
return cmd_name.replace("-", "_")
def cmd_name(python_name):
"""Convert module name (with ``_``) to command name (with ``-``)."""
return python_name.replace('_', '-')
for file in os.listdir(command_path):
if file.endswith(".py") and not re.search(ignore_files, file):
cmd = re.sub(r'.py$', '', file)
commands.append(cmd)
commands.append(cmd_name(cmd))
commands.sort()
@@ -76,33 +89,38 @@ def remove_options(parser, *options):
break
def get_python_name(name):
"""Commands can have '-' in their names, unlike Python identifiers."""
return name.replace("-", "_")
def get_module(cmd_name):
"""Imports the module for a particular command name and returns it.
def get_module(name):
"""Imports the module for a particular command name and returns it."""
module_name = "%s.%s" % (__name__, name)
Args:
cmd_name (str): name of the command for which to get a module
(contains ``-``, not ``_``).
"""
pname = python_name(cmd_name)
module_name = "%s.%s" % (__name__, pname)
module = __import__(module_name,
fromlist=[name, SETUP_PARSER, DESCRIPTION],
fromlist=[pname, SETUP_PARSER, DESCRIPTION],
level=0)
attr_setdefault(module, SETUP_PARSER, lambda *args: None) # null-op
attr_setdefault(module, DESCRIPTION, "")
fn_name = get_python_name(name)
if not hasattr(module, fn_name):
if not hasattr(module, pname):
tty.die("Command module %s (%s) must define function '%s'." %
(module.__name__, module.__file__, fn_name))
(module.__name__, module.__file__, pname))
return module
def get_command(name):
"""Imports the command's function from a module and returns it."""
python_name = get_python_name(name)
return getattr(get_module(python_name), python_name)
def get_command(cmd_name):
"""Imports the command's function from a module and returns it.
Args:
cmd_name (str): name of the command for which to get a module
(contains ``-``, not ``_``).
"""
pname = python_name(cmd_name)
return getattr(get_module(pname), pname)
def parse_specs(args, **kwargs):


@@ -116,6 +116,10 @@ def edit(parser, args):
if args.path:
path = args.path
if name:
# convert command names to python module name
if path == spack.cmd.command_path:
name = spack.cmd.python_name(name)
path = os.path.join(path, name)
if not os.path.exists(path):
files = glob.glob(path + '*')


@@ -0,0 +1,82 @@
##############################################################################
# Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.
# Produced at the Lawrence Livermore National Laboratory.
#
# This file is part of Spack.
# Created by Todd Gamblin, tgamblin@llnl.gov, All rights reserved.
# LLNL-CODE-647188
#
# For details, see https://github.com/spack/spack
# Please also see the NOTICE and LICENSE files for our notice and the LGPL.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License (as
# published by the Free Software Foundation) version 2.1, February 1999.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
# conditions of the GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
##############################################################################
import sys
import llnl.util.tty as tty
from spack.util.log_parse import parse_log_events, make_log_context
description = "filter errors and warnings from build logs"
section = "build"
level = "long"
event_types = ('errors', 'warnings')
def setup_parser(subparser):
subparser.add_argument(
'--show', action='store', default='errors',
help="comma-separated list of what to show; options: errors, warnings")
subparser.add_argument(
'-c', '--context', action='store', type=int, default=3,
help="lines of context to show around lines of interest")
subparser.add_argument(
'-p', '--profile', action='store_true',
help="print out a profile of time spent in regexes during parse")
subparser.add_argument(
'-w', '--width', action='store', type=int, default=None,
help="wrap width: auto-size to terminal by default; 0 for no wrap")
subparser.add_argument(
'-j', '--jobs', action='store', type=int, default=None,
help="number of jobs to parse log file; default is 1 for short logs, "
"ncpus for long logs")
subparser.add_argument(
'file', help="a log file containing build output, or - for stdin")
def log_parse(parser, args):
input = args.file
if args.file == '-':
input = sys.stdin
errors, warnings = parse_log_events(
input, args.context, args.jobs, args.profile)
if args.profile:
return
types = [s.strip() for s in args.show.split(',')]
for e in types:
if e not in event_types:
tty.die('Invalid event type: %s' % e)
events = []
if 'errors' in types:
events.extend(errors)
print('%d errors' % len(errors))
if 'warnings' in types:
events.extend(warnings)
print('%d warnings' % len(warnings))
print(make_log_context(events, args.width))


@@ -40,7 +40,6 @@
from llnl.util.tty.log import log_output
import spack
import spack.cmd
from spack.error import SpackError
@@ -159,11 +158,11 @@ def add_group(group):
def add_subcommand_group(title, commands):
"""Add informational help group for a specific subcommand set."""
cmd_set = set(commands)
cmd_set = set(c for c in commands)
# make a dict of commands of interest
cmds = dict((action.metavar, action) for action in self.actions
if action.metavar in cmd_set)
cmds = dict((a.metavar, a) for a in self.actions
if a.metavar in cmd_set)
# add commands to a group in order, and add the group
group = argparse._ArgumentGroup(self, title=title)
@@ -235,11 +234,8 @@ def add_subcommand_group(title, commands):
# determine help from format above
return formatter.format_help()
def add_command(self, name):
def add_command(self, cmd_name):
"""Add one subcommand to this parser."""
# convert CLI command name to python module name
name = spack.cmd.get_python_name(name)
# lazily initialize any subparsers
if not hasattr(self, 'subparsers'):
# remove the dummy "command" argument.
@@ -250,8 +246,7 @@ def add_command(self, name):
# each command module implements a parser() function, to which we
# pass its subparser for setup.
module = spack.cmd.get_module(name)
cmd_name = name.replace('_', '-')
module = spack.cmd.get_module(cmd_name)
subparser = self.subparsers.add_parser(
cmd_name, help=module.description, description=module.description)
module.setup_parser(subparser)
@@ -292,6 +287,7 @@ def make_argument_parser():
parser.add_argument('-m', '--mock', action='store_true',
help="use mock packages instead of real ones")
parser.add_argument('-p', '--profile', action='store_true',
dest='spack_profile',
help="profile execution using cProfile")
parser.add_argument('-P', '--sorted-profile', default=None, metavar="STAT",
help="profile and sort by one or more of:\n[%s]" %
@@ -370,7 +366,11 @@ class SpackCommand(object):
their output.
"""
def __init__(self, command):
"""Create a new SpackCommand that invokes ``command`` when called."""
"""Create a new SpackCommand that invokes ``command`` when called.
Args:
command (str): name of the command to invoke
"""
self.parser = make_argument_parser()
self.parser.add_command(command)
self.command_name = command
@@ -514,9 +514,9 @@ def main(argv=None):
# Try to load the particular command the caller asked for. If there
# is no module for it, just die.
command_name = spack.cmd.get_python_name(args.command[0])
cmd_name = args.command[0]
try:
parser.add_command(command_name)
parser.add_command(cmd_name)
except ImportError:
if spack.debug:
raise
@@ -534,9 +534,9 @@ def add_subcommand_group(title, commands):
return 0
# now we can actually execute the command.
command = spack.cmd.get_command(command_name)
command = spack.cmd.get_command(cmd_name)
try:
if args.profile or args.sorted_profile:
if args.spack_profile or args.sorted_profile:
_profile_wrapper(command, parser, args, unknown)
elif args.pdb:
import pdb


@@ -24,18 +24,25 @@
##############################################################################
from __future__ import print_function
import sys
from six import StringIO
from ctest_log_parser import CTestLogParser
from ctest_log_parser import CTestLogParser, BuildError, BuildWarning
import llnl.util.tty as tty
from llnl.util.tty.color import colorize
__all__ = ['parse_log_events', 'make_log_context']
def parse_log_events(stream, context=6):
def parse_log_events(stream, context=6, jobs=None, profile=False):
"""Extract interesting events from a log file as a list of LogEvent.
Args:
stream (str or fileobject): build log name or file object
context (int): lines of context to extract around each log event
jobs (int): number of jobs to parse with; default ncpus
profile (bool): print out profile information for parsing
Returns:
(tuple): two lists containing ``BuildError`` and
@@ -46,47 +53,87 @@ def parse_log_events(stream, context=6):
that all the regex compilation is only done once.
"""
if parse_log_events.ctest_parser is None:
parse_log_events.ctest_parser = CTestLogParser()
parse_log_events.ctest_parser = CTestLogParser(profile=profile)
return parse_log_events.ctest_parser.parse(stream, context)
result = parse_log_events.ctest_parser.parse(stream, context, jobs)
if profile:
parse_log_events.ctest_parser.print_timings()
return result
#: lazily constructed CTest log parser
parse_log_events.ctest_parser = None
def make_log_context(log_events):
def _wrap(text, width):
"""Break text into lines of specific width."""
lines = []
pos = 0
while pos < len(text):
lines.append(text[pos:pos + width])
pos += width
return lines
def make_log_context(log_events, width=None):
"""Get error context from a log file.
Args:
log_events (list of LogEvent): list of events created by
``ctest_log_parser.parse()``
width (int or None): wrap width; ``0`` for no limit; ``None`` to
auto-size for terminal
Returns:
str: context from the build log with errors highlighted
Parses the log file for lines containing errors, and prints them out
with line numbers and context. Errors are marked with '>>' and
highlighted in red (if color is enabled).
Events are sorted by line number before they are displayed.
"""
error_lines = set(e.line_no for e in log_events)
log_events = sorted(log_events, key=lambda e: e.line_no)
num_width = len(str(max(error_lines))) + 4
line_fmt = '%%-%dd%%s' % num_width
indent = ' ' * (5 + num_width)
if width is None:
_, width = tty.terminal_size()
if width <= 0:
width = sys.maxsize
wrap_width = width - num_width - 6
out = StringIO()
next_line = 1
for event in log_events:
start = event.start
if start > next_line:
out.write(' [ ... ]\n')
if isinstance(event, BuildError):
color = 'R'
elif isinstance(event, BuildWarning):
color = 'Y'
else:
color = 'W'
if next_line != 1 and start > next_line:
out.write('\n ...\n\n')
if start < next_line:
start = next_line
for i in range(start, event.end):
# wrap to width
lines = _wrap(event[i], wrap_width)
lines[1:] = [indent + l for l in lines[1:]]
wrapped_line = line_fmt % (i, '\n'.join(lines))
if i in error_lines:
out.write(colorize(' @R{>> %-6d%s}\n' % (i, event[i])))
out.write(
colorize(' @%s{>> %s}\n' % (color, wrapped_line)))
else:
out.write(' %-6d%s\n' % (i, event[i]))
out.write(' %s\n' % wrapped_line)
next_line = event.end