Merge lp:~allenap/launchpad/more-retest into lp:launchpad

Proposed by Gavin Panella
Status: Merged
Approved by: Gavin Panella
Approved revision: no longer in the source branch.
Merged at revision: not available
Proposed branch: lp:~allenap/launchpad/more-retest
Merge into: lp:launchpad
Diff against target: 134 lines
1 file modified
buildout-templates/bin/retest.in (+57/-40)
To merge this branch: bzr merge lp:~allenap/launchpad/more-retest
Reviewer Review Type Date Requested Status
Michael Nelson (community) code Approve
Review via email: mp+12709@code.launchpad.net
Revision history for this message
Gavin Panella (allenap) wrote :

This branch adjusts retest.py to allow pasting of a test report (by
using fileinput), changes the parsing code to work with multiple
chunks (so test output can be appended to a log), makes it *not* run
the whole test suite if it doesn't find any tests in the log, improves
the usage docs, and MOST IMPORTANTLY makes it possible to kill the
test process with Ctrl-C, which was especially important in the days
when it could decide to try and run the whole test suite.

It also moves the script to be a buildout template, which ends up
creating bin/retest (currently it's not a template, and lives in
utilities/retest.py).
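
In outline, the fileinput and exec changes combine like this. The sketch
below is not the branch's code verbatim: the failed_tests() helper and the
hard-coded 'bin/test' path are illustrative stand-ins for what the real
script does (see the diff at the bottom of this page).

    import fileinput
    import os

    def failed_tests(lines):
        # Hypothetical condensed parser: collect the names listed
        # between "Tests with failures:" and "Total:". Several such
        # chunks may appear when output has been appended to a log.
        collecting = False
        for line in lines:
            if line.startswith('Tests with failures:'):
                collecting = True
            elif line.startswith('Total:'):
                collecting = False
            elif collecting and line.strip():
                yield line.strip()

    # fileinput.input() iterates over every log named on the command
    # line, or over stdin when no arguments (or "-") are given, which
    # is what makes pasting a report possible.
    tests = set(failed_tests(fileinput.input()))
    if tests:
        args = ['-vv']
        for test in sorted(tests):
            args.extend(['-t', test])
        # exec replaces this process with the test runner, so Ctrl-C is
        # delivered straight to the tests; with subprocess.Popen there
        # was a wrapper process sitting in front of them.
        os.execl('bin/test', 'bin/test', *args)

Because the script is now a buildout template, the generated bin/retest
also picks up the right interpreter and tree: ${buildout:executable} and
${buildout:directory} are substituted when buildout creates it.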

Revision history for this message
Michael Nelson (michael.nelson) wrote :

> This branch adjusts retest.py to allow pasting of a test report (by
> using fileinput), changes the parsing code to work with multiple
> chunks (so test output can be appended to a log), makes it *not* run
> the whole test suite if it doesn't find any tests in the log, improves
> the usage docs, and MOST IMPORTANTLY makes it possible to kill the
> test process with Ctrl-C, which was especially important in the days
> when it could decide to try and run the whole test suite.
>
> It also moves the script to be a buildout template, which ends up
> creating bin/retest (currently it's not a template, and lives in
> utilities/retest.py).

That's great! Thanks Gavin.

launchpad/more-retest/+merge/12709
<noodles775> allenap: sure thing!
<allenap> noodles775: Thanks :)
<thumper> deryck: https://code.edge.launchpad.net/~thumper/launchpad/inline-lifecycle-status-edit/+merge/12707
* sinzui (n=sinzui@91.189.88.12) has joined #launchpad-reviews
<noodles775> allenap: nice - I hadn't seen takewhile before...
<noodles775> allenap: on line 97, why are you converting to a set rather than a list?
<noodles775> I mean, test output will not have duplicates...?
* noodles775 has changed the topic to: on call: noodles775 || reviewing: allenap || queue: [] || This channel is logged: http://irclogs.ubuntu.com
<allenap> noodles775: If multiple test logs are passed in (one of the new features), this removes duplicates.
<noodles775> allenap: then why not convert it back to a sorted list then and there?
<noodles775> ie., so extract_tests returns a sorted list of unique tests?
<noodles775> That's great that multiple logs can be used - and that it's now using a buildout template!
<allenap> noodles775: I only want them sorted to display them prettily, otherwise a set is what I want. Not much in it really.
<allenap> noodles775: As in, could be either a set or a list, but it happens to be a set because of de-duplication.
<noodles775> oh, I didn't see the set features being used anywhere... looking again.
<noodles775> Ah, gotcha. ok.
* sinzui has quit (Read error: 104 (Connection reset by peer))
* sinzui1 (n=sinzui@91.189.88.12) has joined #launchpad-reviews
<noodles775> allenap: great, r=me... just note, there's one reference to the old utilities/retest in the usage instructions.
<noodles775> er, actually, two :)
<allenap> noodles775: Ah, good spot, thanks :)

review: Approve (code)
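
The set-versus-list point from the IRC discussion above comes down to this:
takewhile() pulls out the lines between "Tests with failures:" and "Total:",
and collecting them in a set means a failure that shows up in more than one
pasted log is only run once; sorting happens only when the names are
displayed. A condensed sketch (the report lines here are made up, and the
lambda predicate stands in for the branch's p_take helper):

    from itertools import takewhile

    def gen_test_lines(lines):
        lines = iter(lines)
        for line in lines:
            if line.startswith('Tests with failures:'):
                # takewhile() yields lines up to, but not including,
                # the "Total:" summary line.
                for failure in takewhile(
                        lambda s: not s.startswith('Total:'), lines):
                    yield failure

    # Two pasted copies of the same (made-up) failure report.
    report_lines = [
        'Tests with failures:',
        '   lib/lp/registry/tests/test_example.txt',
        'Total: 1 tests, 1 failures, 0 errors',
    ] * 2
    tests = set(line.strip() for line in gen_test_lines(report_lines))
    print(sorted(tests))   # one entry; sorted only for display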

Preview Diff

=== renamed file 'utilities/retest.py' => 'buildout-templates/bin/retest.in'
--- utilities/retest.py 2009-09-25 20:28:53 +0000
+++ buildout-templates/bin/retest.in 2009-10-01 10:38:14 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!${buildout:executable}
 #
 # Copyright 2009 Canonical Ltd. This software is licensed under the
 # GNU Affero General Public License version 3 (see the file LICENSE).
@@ -7,23 +7,39 @@
 Given an error report, run all of the failed tests again.
 
 For instance, it can be used in the following scenario:
-% bin/test -vvm lp.registry | tee test.out
-% # Oh nos! Failures!
-% # Fix tests.
-% utilities/retest.py test.out
+
+  % bin/test -vvm lp.registry | tee test.out
+  % # Oh nos! Failures!
+  % # Fix tests.
+  % bin/retest test.out
+
+Or, when run without arguments (or if any argument is "-"), a test
+report (or a part of) can be piped in, for example by pasting it:
+
+  % bin/retest
+  Tests with failures:
+     lib/lp/registry/browser/tests/sourcepackage-views.txt
+     lib/lp/registry/tests/../stories/product/xx-product-package-pages.txt
+  Total: ... tests, 2 failures, 0 errors in ...
+
 """
 
-import subprocess
+import fileinput
+import os
+import re
 import sys
-import re
+from itertools import takewhile
 from pprint import pprint
 
 
+# The test script for this branch.
+TEST = '${buildout:directory}/bin/test'
+
 # Regular expression to match numbered stories.
 STORY_RE = re.compile("(.*)/\d{2}-.*")
 
 
-def getTestName(test):
+def get_test_name(test):
     """Get the test name of a failed test.
 
     If the test is part of a numbered story,
@@ -36,44 +52,45 @@
         return test
 
 
-def extractTests(input_file):
-    """Get the set of tests to be run.
-
-    Given a file object for a test summary report, find all of the tests to be
-    run.
-    """
-    failed_tests = set()
-    reading_tests = False
-    for line in input_file:
-        if line.startswith('Tests with failures:'):
-            reading_tests = True
-            continue
-        if reading_tests:
-            if line.startswith('Total:'):
-                break
-            test = getTestName(line.strip())
-            failed_tests.add(test)
-    return failed_tests
+def gen_test_lines(lines):
+    def p_start(line):
+        return line.startswith('Tests with failures:')
+    def p_take(line):
+        return not line.startswith('Total:')
+    lines = iter(lines)
+    for line in lines:
+        if p_start(line):
+            for line in takewhile(p_take, lines):
+                yield line
+
+
+def gen_tests(test_lines):
+    for test_line in test_lines:
+        yield get_test_name(test_line.strip())
+
+
+def extract_tests(lines):
+    return set(gen_tests(gen_test_lines(lines)))
 
 
 def run_tests(tests):
     """Given a set of tests, run them as one group."""
-    cmd = ['bin/test', '-vv']
     print "Running tests:"
-    pprint(sorted(list(tests)))
+    pprint(sorted(tests))
+    args = ['-vv']
     for test in tests:
-        cmd.append('-t')
-        cmd.append(test)
-    p = subprocess.Popen(cmd)
-    p.wait()
+        args.append('-t')
+        args.append(test)
+    os.execl(TEST, TEST, *args)
 
 
 if __name__ == '__main__':
-    try:
-        log_file = sys.argv[1]
-    except IndexError:
-        print "Usage: %s test_output_file" % (sys.argv[0])
-        sys.exit(-1)
-    fd = open(log_file, 'r')
-    failed_tests = extractTests(fd)
-    run_tests(failed_tests)
+    tests = extract_tests(fileinput.input())
+    if len(tests) >= 1:
+        run_tests(tests)
+    else:
+        sys.stdout.write(
            "Error: no tests found\n"
            "Usage: %s [test_output_file|-] ...\n\n%s\n\n" % (
                sys.argv[0], __doc__.strip()))
+        sys.exit(1)