Merge lp:~jml/testrepository/show-failures-incrementally-613152 into lp:~testrepository/testrepository/trunk

Proposed by Jonathan Lange
Status: Merged
Merged at revision: 100
Proposed branch: lp:~jml/testrepository/show-failures-incrementally-613152
Merge into: lp:~testrepository/testrepository/trunk
Diff against target: 884 lines (+334/-143)
16 files modified
testrepository/commands/__init__.py (+0/-21)
testrepository/commands/failing.py (+21/-22)
testrepository/commands/last.py (+7/-15)
testrepository/commands/load.py (+15/-15)
testrepository/results.py (+13/-0)
testrepository/tests/__init__.py (+17/-0)
testrepository/tests/commands/test_failing.py (+16/-7)
testrepository/tests/commands/test_last.py (+3/-5)
testrepository/tests/commands/test_load.py (+9/-6)
testrepository/tests/test_matchers.py (+17/-0)
testrepository/tests/test_results.py (+28/-0)
testrepository/tests/test_ui.py (+9/-8)
testrepository/tests/ui/test_cli.py (+63/-5)
testrepository/ui/__init__.py (+49/-12)
testrepository/ui/cli.py (+22/-24)
testrepository/ui/model.py (+45/-3)
To merge this branch: bzr merge lp:~jml/testrepository/show-failures-incrementally-613152
Reviewer: Robert Collins (status: Approve)
Review via email: mp+31765@code.launchpad.net

Commit message

Show test failures and errors as we get them in testr load.

Description of the change

In the Wikipedia spirit of editing boldly, I've hacked up testrepository to show failures incrementally. Most of the damage occurs in the UI contract.

Here's what I've done:

 * Added UI.make_result to the public interface, making each UI
   responsible for constructing its own result object.

 * Changed 'failing', 'load' and 'last' to use the result
   provided by the UI, rather than making their own staging
   result object to capture the stream.

 * Changed Command.output_run to no longer take an output stream,
   since such a thing is no longer required -- the UI's result takes
   care of it now.

 * Dropped UI.output_results, since there's no longer any actual
   use case.

 * Gave the CLI UI a TestResult implementation that prints errors
   and failures as it gets them.

Changing 'last' and 'failing' wasn't strictly necessary to fix bug 613152, nor was removing UI.output_results or changing Command.output_run, but it seemed better to me to go the whole hog.

I also cleaned up pyflakes warnings where I saw them.

106. By Jonathan Lange

Remove output_results, not needed.

Revision history for this message
Jelmer Vernooij (jelmer) wrote :

I can't find the "I like this" button.

Revision history for this message
Jonathan Lange (jml) wrote :

Still need a review for this.

Revision history for this message
Robert Collins (lifeless) wrote :

output_run looks like it doesn't output a run anymore: its intent is sufficiently different that I would like the name to actually match.

output_run_summary, perhaps.

or

output_result, which seems to be what it does now.

I think things are a little confused, in fact; it looks to me that the summary printing at the end of the result wants to happen in the stopTestRun method of the result returned from makeResult, and output_run can be wholly deleted.

You've added a TODO for which there is already an example of doing that TODO elsewhere in your diff.

The rest looks ok.

Thanks for doing this, it is appreciated, I've just been stupidly busy bootstrapping stuff in lp.

review: Needs Fixing
107. By Jonathan Lange

Merge trunk, doing major work to resolve the conflict in the failing command.

108. By Jonathan Lange

make_result now takes a callable that returns the id of the test run.
Not actually used yet.

109. By Jonathan Lange

Refactor the CLITestResult tests so they don't care so much about how results
are constructed.

110. By Jonathan Lange

Wildcard object equal to everything.

111. By Jonathan Lange

Use Wildcard to make matching UI output a little nicer.

112. By Jonathan Lange

Give the UI's TestResult object full responsibility for summing up the result
of the test run.

113. By Jonathan Lange

Oops.

114. By Jonathan Lange

Delete unused output_run.

115. By Jonathan Lange

Tests for results module.

116. By Jonathan Lange

Probably not.

Revision history for this message
Jonathan Lange (jml) wrote :

As indicated on IRC, I didn't add a TODO, I just moved it.

It took me a while to refactor the code to use stopTestRun rather than output_run. I have had to change some of the behaviour to do so.

Specifically:
 * 'testr failing' now shows run id, total tests and skip count as well as failure count
 * 'testr failing' now has return code 1 when there are failing tests
 * 'testr load' will show skips

There's probably other stuff, although I tried to minimize it. Big enough that it definitely needs review.

Revision history for this message
Robert Collins (lifeless) wrote :

+    def _make_result(self, repo, evaluator):
+        if self.ui.options.list:
+            return evaluator

These two things seem disconnected; perhaps rather than evaluator you should say list_result or something. I think something that was originally together has been split out far enough that the parameter needs a better name.

+Wildcard = _Wildcard()

perhaps
wildcard = Wildcard()

would be nicer. Many things (like str, object, etc) are lowercase for instances in Python.

+    def _output_run(self, run_id):

def _output_summary

 - I think.

+        return ''.join([
+            self.sep1,
+            '%s: %s\n' % (label, test.id()),
+            self.sep2,
+            error_text,
+            ])

Looks like a lurking UnicodeDecodeError to me; we either need to make this unicode always or manually encode error. One way to test that would be to throw a mixed encoding, non-ascii test outcome at it.
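
Restating the hazard in Python 3 terms for illustration (a hypothetical sketch, not code from the branch): once byte strings carrying different encodings are concatenated, no single codec can decode the result.

```python
# 'foo' encoded as ASCII followed by a UTF-16 payload (which begins with
# the byte-order mark 0xFF 0xFE) yields bytes no single codec can decode.
mixed = 'foo'.encode('ascii') + 'проба'.encode('utf-16')

try:
    mixed.decode('utf-8')
    decoded_ok = True
except UnicodeDecodeError:
    # 0xFF is never a valid byte in UTF-8, so decoding fails.
    decoded_ok = False
```

Keeping everything as text (unicode) until the final write, or encoding each piece explicitly, is what avoids this.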

Would you be kind enough to do these tweaks? Then it's definitely good to go.

review: Approve
Revision history for this message
Jonathan Lange (jml) wrote :

On Sun, Sep 26, 2010 at 9:46 AM, Robert Collins
<email address hidden> wrote:
> Review: Approve
> +    def _make_result(self, repo, evaluator):
> +        if self.ui.options.list:
> +            return evaluator
>
> this two things seem disconnected; perhaps rather than evaluator you should say list_result or something. I think something originally together has been split out far enough that the parameter needs a better name.

Done.

>
>
> +Wildcard = _Wildcard()
>
> perhaps
> wildcard = Wildcard()
>
> would be nicer. Many things (like str, object, etc) are lowercase for instances in Python.
>

Instances of 'type', perhaps. I chose the case to reflect None, True
and False: other singleton constants in Python.

>
> +    def _output_run(self, run_id):
>
> def _output_summary
>
>  - I think.
>

Changed.

>
> +        return ''.join([
> +            self.sep1,
> +            '%s: %s\n' % (label, test.id()),
> +            self.sep2,
> +            error_text,
> +            ])
>
> Looks like a lurking UnicodeDecodeError to me; we either need to make this unicode always or manually encode error. One way to test that would be to throw a mixed encoding, non-ascii test outcome at it.

Well, it's not a *new* lurking UnicodeDecodeError. It's equivalent to
what was there earlier.

We are always going to be getting the error_text from the base
TestResult. In this case, we are relying on testtools to store
unicode. I've changed all of the literals to be unicode to at least
communicate this more clearly.

I was going to add a test (below), but it seems to be a case of
"Doctor, it hurts when I do this!".

    def test_format_error_mixed_encoding(self):
        result = self.make_result()
        error_text = 'foo' + u'проба'.encode('utf-16')
        error = result._format_error(u'label', self, error_text)
        expected = u'%s%s: %s\n%s%s' % (
            result.sep1, u'label', self.id(), result.sep2, error_text)
        self.assertEqual(error, expected)

>
> Would you be kind enough to do these tweaks? then its definitely gtg.

Thanks,
jml

Revision history for this message
Jelmer Vernooij (jelmer) wrote :

W00t!

Preview Diff

=== modified file 'testrepository/commands/__init__.py'
--- testrepository/commands/__init__.py 2010-02-28 23:02:29 +0000
+++ testrepository/commands/__init__.py 2010-09-20 17:47:50 +0000
@@ -150,27 +150,6 @@
     def _init(self):
         """Per command init call, called into by Command.__init__."""
 
-    def output_run(self, run_id, output, evaluator):
-        """Output a test run.
-
-        :param run_id: The run id.
-        :param output: A StringIO containing a subunit stream for some portion of the run to show.
-        :param evaluator: A TestResult evaluating the entire run.
-        """
-        if self.ui.options.quiet:
-            return
-        if output.getvalue():
-            output.seek(0)
-            self.ui.output_results(subunit.ProtocolTestCase(output))
-        values = [('id', run_id), ('tests', evaluator.testsRun)]
-        failures = len(evaluator.failures) + len(evaluator.errors)
-        if failures:
-            values.append(('failures', failures))
-        skips = sum(map(len, evaluator.skip_reasons.itervalues()))
-        if skips:
-            values.append(('skips', skips))
-        self.ui.output_values(values)
-
     def run(self):
         """The core logic for this command to be implemented by subclasses."""
         raise NotImplementedError(self.run)
 
=== modified file 'testrepository/commands/failing.py'
--- testrepository/commands/failing.py 2010-09-11 19:56:11 +0000
+++ testrepository/commands/failing.py 2010-09-20 17:47:50 +0000
@@ -14,13 +14,13 @@
 
 """Show the current failures in the repository."""
 
-from cStringIO import StringIO
 import optparse
 
-import subunit.test_results
 from testtools import MultiTestResult, TestResult
 
 from testrepository.commands import Command
+from testrepository.results import TestResultFilter
+
 
 class failing(Command):
     """Show the current failures known by the repository.
@@ -41,17 +41,31 @@
             default=False, help="Show only a list of failing tests."),
         ]
 
+    def _list_subunit(self, run):
+        # TODO only failing tests.
+        stream = run.get_subunit_stream()
+        self.ui.output_stream(stream)
+        if stream:
+            return 1
+        else:
+            return 0
+
+    def _make_result(self, repo, evaluator):
+        if self.ui.options.list:
+            return evaluator
+        output_result = self.ui.make_result(repo.latest_id)
+        filtered = TestResultFilter(output_result, filter_skip=True)
+        return MultiTestResult(evaluator, filtered)
+
     def run(self):
         repo = self.repository_factory.open(self.ui.here)
         run = repo.get_failing()
+        if self.ui.options.subunit:
+            return self._list_subunit(run)
         case = run.get_test()
         failed = False
         evaluator = TestResult()
-        output = StringIO()
-        output_stream = subunit.TestProtocolClient(output)
-        filtered = subunit.test_results.TestResultFilter(output_stream,
-            filter_skip=True)
-        result = MultiTestResult(evaluator, filtered)
+        result = self._make_result(repo, evaluator)
         result.startTestRun()
         try:
             case.run(result)
@@ -66,19 +80,4 @@
             failing_tests = [
                 test for test, _ in evaluator.errors + evaluator.failures]
             self.ui.output_tests(failing_tests)
-            return result
-        if self.ui.options.subunit:
-            # TODO only failing tests.
-            self.ui.output_stream(run.get_subunit_stream())
-            return result
-        if self.ui.options.quiet:
-            return result
-        if output.getvalue():
-            output.seek(0)
-            self.ui.output_results(subunit.ProtocolTestCase(output))
-        values = []
-        failures = len(evaluator.failures) + len(evaluator.errors)
-        if failures:
-            values.append(('failures', failures))
-        self.ui.output_values(values)
         return result
 
=== modified file 'testrepository/commands/last.py'
--- testrepository/commands/last.py 2010-01-10 08:52:00 +0000
+++ testrepository/commands/last.py 2010-09-20 17:47:50 +0000
@@ -5,7 +5,7 @@
 # license at the users choice. A copy of both licenses are available in the
 # project source as Apache-2.0 and BSD. You may not use this file except in
 # compliance with one of these two licences.
-# 
+#
 # Unless required by applicable law or agreed to in writing, software
 # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
@@ -14,16 +14,13 @@
 
 """Show the last run loaded into a repository."""
 
-from cStringIO import StringIO
-
-import subunit.test_results
-from testtools import MultiTestResult, TestResult
-
 from testrepository.commands import Command
+from testrepository.results import TestResultFilter
+
 
 class last(Command):
     """Show the last run loaded into a repository.
 
     Failing tests are shown on the console and a summary of the run is printed
     at the end.
     """
@@ -33,19 +30,14 @@
         run_id = repo.latest_id()
         case = repo.get_test_run(run_id).get_test()
         failed = False
-        evaluator = TestResult()
-        output = StringIO()
-        output_stream = subunit.TestProtocolClient(output)
-        filtered = subunit.test_results.TestResultFilter(output_stream,
-            filter_skip=True)
-        result = MultiTestResult(evaluator, filtered)
+        output_result = self.ui.make_result(lambda: run_id)
+        result = TestResultFilter(output_result, filter_skip=True)
         result.startTestRun()
         try:
             case.run(result)
         finally:
             result.stopTestRun()
-        failed = not evaluator.wasSuccessful()
-        self.output_run(run_id, output, evaluator)
+        failed = not result.wasSuccessful()
         if failed:
             return 1
         else:
 
=== modified file 'testrepository/commands/load.py'
--- testrepository/commands/load.py 2010-01-10 08:52:00 +0000
+++ testrepository/commands/load.py 2010-09-20 17:47:50 +0000
@@ -1,11 +1,11 @@
 #
 # Copyright (c) 2009 Testrepository Contributors
-# 
+#
 # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
 # license at the users choice. A copy of both licenses are available in the
 # project source as Apache-2.0 and BSD. You may not use this file except in
 # compliance with one of these two licences.
-# 
+#
 # Unless required by applicable law or agreed to in writing, software
 # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
@@ -14,16 +14,16 @@
 
 """Load data into a repository."""
 
-from cStringIO import StringIO
-
-import subunit.test_results
-from testtools import MultiTestResult, TestResult
+import subunit
+from testtools import MultiTestResult
 
 from testrepository.commands import Command
+from testrepository.results import TestResultFilter
+
 
 class load(Command):
     """Load a subunit stream into a repository.
 
     Failing tests are shown on the console and a summary of the stream is
     printed at the end.
     """
@@ -34,21 +34,21 @@
         path = self.ui.here
         repo = self.repository_factory.open(path)
         failed = False
+        run_id = None
         for stream in self.ui.iter_streams('subunit'):
             inserter = repo.get_inserter()
-            evaluator = TestResult()
-            output = StringIO()
-            output_stream = subunit.TestProtocolClient(output)
-            filtered = subunit.test_results.TestResultFilter(output_stream,
-                filter_skip=True)
+            output_result = self.ui.make_result(lambda: run_id)
+            # XXX: We want to *count* skips, but not show them.
+            filtered = TestResultFilter(output_result, filter_skip=False)
             case = subunit.ProtocolTestCase(stream)
+            filtered.startTestRun()
             inserter.startTestRun()
             try:
-                case.run(MultiTestResult(inserter, evaluator, filtered))
+                case.run(MultiTestResult(inserter, filtered))
             finally:
                 run_id = inserter.stopTestRun()
-            failed = failed or not evaluator.wasSuccessful()
-            self.output_run(run_id, output, evaluator)
+            filtered.stopTestRun()
+            failed = failed or not filtered.wasSuccessful()
         if failed:
             return 1
         else:
 
=== added file 'testrepository/results.py'
--- testrepository/results.py 1970-01-01 00:00:00 +0000
+++ testrepository/results.py 2010-09-20 17:47:50 +0000
@@ -0,0 +1,13 @@
+from subunit import test_results
+
+
+class TestResultFilter(test_results.TestResultFilter):
+    """Test result filter."""
+
+    def _filtered(self):
+        super(TestResultFilter, self)._filtered()
+        # XXX: This is really crappy. It assumes that the test result we
+        # actually care about is decorated twice. Probably the more correct
+        # thing to do is fix subunit so that incrementing 'testsRun' on a test
+        # result increments them on the decorated test result.
+        self.decorated.decorated.testsRun += 1
=== modified file 'testrepository/tests/__init__.py'
--- testrepository/tests/__init__.py 2010-01-16 00:01:45 +0000
+++ testrepository/tests/__init__.py 2010-09-20 17:47:50 +0000
@@ -30,6 +30,22 @@
             self)
 
 
+class _Wildcard(object):
+    """Object that is equal to everything."""
+
+    def __repr__(self):
+        return '*'
+
+    def __eq__(self, other):
+        return True
+
+    def __ne__(self, other):
+        return False
+
+
+Wildcard = _Wildcard()
+
+
 def test_suite():
     packages = [
         'arguments',
@@ -43,6 +59,7 @@
         'matchers',
         'monkeypatch',
         'repository',
+        'results',
         'setup',
         'stubpackage',
         'testr',
 
=== modified file 'testrepository/tests/commands/test_failing.py'
--- testrepository/tests/commands/test_failing.py 2010-09-07 12:37:17 +0000
+++ testrepository/tests/commands/test_failing.py 2010-09-20 17:47:50 +0000
@@ -22,7 +22,7 @@
 from testrepository.commands import failing
 from testrepository.ui.model import UI
 from testrepository.repository import memory
-from testrepository.tests import ResourcedTestCase
+from testrepository.tests import ResourcedTestCase, Wildcard
 
 
 class TestCommand(ResourcedTestCase):
@@ -48,14 +48,12 @@
         Cases('ok').run(inserter)
         inserter.stopTestRun()
         self.assertEqual(1, cmd.execute())
-        self.assertEqual('results', ui.outputs[0][0])
-        suite = ui.outputs[0][1]
-        ui.outputs[0] = ('results', None)
         # We should have seen test outputs (of the failure) and summary data.
         self.assertEqual([
-            ('results', None),
-            ('values', [('failures', 1)])],
+            ('results', Wildcard),
+            ('values', [('id', 0), ('tests', 1), ('failures', 1)])],
             ui.outputs)
+        suite = ui.outputs[0][1]
         result = testtools.TestResult()
         result.startTestRun()
         try:
@@ -116,6 +114,16 @@
         open = cmd.repository_factory.open
         def decorate_open_with_get_failing(url):
             repo = open(url)
+            inserter = repo.get_inserter()
+            inserter.startTestRun()
+            class Cases(ResourcedTestCase):
+                def failing(self):
+                    self.fail('foo')
+                def ok(self):
+                    pass
+            Cases('failing').run(inserter)
+            Cases('ok').run(inserter)
+            inserter.stopTestRun()
             orig = repo.get_failing
             def get_failing():
                 calls.append(True)
@@ -124,5 +132,6 @@
             return repo
         cmd.repository_factory.open = decorate_open_with_get_failing
         cmd.repository_factory.initialise(ui.here)
-        self.assertEqual(0, cmd.execute())
+        self.assertEqual(1, cmd.execute())
         self.assertEqual([True], calls)
+
 
=== modified file 'testrepository/tests/commands/test_last.py'
--- testrepository/tests/commands/test_last.py 2010-01-10 08:52:00 +0000
+++ testrepository/tests/commands/test_last.py 2010-09-20 17:47:50 +0000
@@ -19,7 +19,7 @@
 from testrepository.commands import last
 from testrepository.ui.model import UI
 from testrepository.repository import memory
-from testrepository.tests import ResourcedTestCase
+from testrepository.tests import ResourcedTestCase, Wildcard
 
 
 class TestCommand(ResourcedTestCase):
@@ -45,14 +45,12 @@
         Cases('ok').run(inserter)
         id = inserter.stopTestRun()
         self.assertEqual(1, cmd.execute())
-        self.assertEqual('results', ui.outputs[0][0])
-        suite = ui.outputs[0][1]
-        ui.outputs[0] = ('results', None)
         # We should have seen test outputs (of the failure) and summary data.
         self.assertEqual([
-            ('results', None),
+            ('results', Wildcard),
             ('values', [('id', id), ('tests', 2), ('failures', 1)])],
             ui.outputs)
+        suite = ui.outputs[0][1]
         result = testtools.TestResult()
         result.startTestRun()
         try:
 
=== modified file 'testrepository/tests/commands/test_load.py'
--- testrepository/tests/commands/test_load.py 2010-01-08 12:08:41 +0000
+++ testrepository/tests/commands/test_load.py 2010-09-20 17:47:50 +0000
@@ -18,7 +18,7 @@
 
 from testrepository.commands import load
 from testrepository.ui.model import UI
-from testrepository.tests import ResourcedTestCase
+from testrepository.tests import ResourcedTestCase, Wildcard
 from testrepository.tests.test_repository import RecordingRepositoryFactory
 from testrepository.repository import memory
 
@@ -77,10 +77,8 @@
         cmd.repository_factory.initialise(ui.here)
         self.assertEqual(1, cmd.execute())
         suite = ui.outputs[0][1]
-        self.assertEqual('results', ui.outputs[0][0])
-        ui.outputs[0] = ('results', None)
         self.assertEqual([
-            ('results', None),
+            ('results', Wildcard),
             ('values', [('id', 0), ('tests', 1), ('failures', 1)])],
             ui.outputs)
         result = testtools.TestResult()
@@ -100,7 +98,8 @@
         cmd.repository_factory.initialise(ui.here)
         self.assertEqual(0, cmd.execute())
         self.assertEqual(
-            [('values', [('id', 0), ('tests', 1), ('skips', 1)])],
+            [('results', Wildcard),
+             ('values', [('id', 0), ('tests', 1), ('skips', 1)])],
             ui.outputs)
 
     def test_load_new_shows_test_summary_no_tests(self):
@@ -110,7 +109,9 @@
         cmd.repository_factory = memory.RepositoryFactory()
         cmd.repository_factory.initialise(ui.here)
         self.assertEqual(0, cmd.execute())
-        self.assertEqual([('values', [('id', 0), ('tests', 0)])], ui.outputs)
+        self.assertEqual(
+            [('results', Wildcard), ('values', [('id', 0), ('tests', 0)])],
+            ui.outputs)
 
     def test_load_new_shows_test_summary_per_stream(self):
         # This may not be the final layout, but for now per-stream stats are
@@ -122,7 +123,9 @@
         cmd.repository_factory.initialise(ui.here)
         self.assertEqual(0, cmd.execute())
         self.assertEqual([
+            ('results', Wildcard),
             ('values', [('id', 0), ('tests', 0)]),
+            ('results', Wildcard),
             ('values', [('id', 1), ('tests', 0)])],
             ui.outputs)
 
=== modified file 'testrepository/tests/test_matchers.py'
--- testrepository/tests/test_matchers.py 2010-01-16 00:01:45 +0000
+++ testrepository/tests/test_matchers.py 2010-09-20 17:47:50 +0000
@@ -15,6 +15,7 @@
 """Tests for matchers used by or for testing testrepository."""
 
 import sys
+from testtools import TestCase
 
 from testrepository.tests import ResourcedTestCase
 from testrepository.tests.matchers import MatchesException
@@ -55,3 +56,19 @@
             error = sys.exc_info()
         mismatch = matcher.match(error)
         self.assertEqual(None, mismatch)
+
+
+class TestWildcard(TestCase):
+
+    def test_wildcard_equals_everything(self):
+        from testrepository.tests import Wildcard
+        self.assertTrue(Wildcard == 5)
+        self.assertTrue(Wildcard == 'orange')
+        self.assertTrue('orange' == Wildcard)
+        self.assertTrue(5 == Wildcard)
+
+    def test_wildcard_not_equals_nothing(self):
+        from testrepository.tests import Wildcard
+        self.assertFalse(Wildcard != 5)
+        self.assertFalse(Wildcard != 'orange')
+
 
=== added file 'testrepository/tests/test_results.py'
--- testrepository/tests/test_results.py 1970-01-01 00:00:00 +0000
+++ testrepository/tests/test_results.py 2010-09-20 17:47:50 +0000
@@ -0,0 +1,28 @@
+#
+# Copyright (c) 2010 Testrepository Contributors
+#
+# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
+# license at the users choice. A copy of both licenses are available in the
+# project source as Apache-2.0 and BSD. You may not use this file except in
+# compliance with one of these two licences.
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# license you chose for the specific language governing permissions and
+# limitations under that license.
+
+from testtools import TestCase, TestResult
+
+from testrepository.results import TestResultFilter
+
+
+class ResultFilter(TestCase):
+
+    def test_addSuccess_increases_count(self):
+        result = TestResult()
+        filtered = TestResultFilter(result)
+        filtered.startTest(self)
+        filtered.addSuccess(self)
+        filtered.stopTest(self)
+        self.assertEqual(1, result.testsRun)
=== modified file 'testrepository/tests/test_ui.py'
--- testrepository/tests/test_ui.py 2010-09-07 12:37:17 +0000
+++ testrepository/tests/test_ui.py 2010-09-20 17:47:50 +0000
@@ -102,14 +102,6 @@
         ui = self.get_test_ui()
         ui.output_rest('')
 
-    def test_output_results(self):
-        # output_results can be called and takes a thing that can be 'run'.
-        ui = self.get_test_ui()
-        class Case(ResourcedTestCase):
-            def method(self):
-                pass
-        ui.output_results(Case('method'))
-
     def test_output_stream(self):
         # a stream of bytes can be output.
         ui = self.get_test_ui()
@@ -192,3 +184,12 @@
             stderr=subprocess.PIPE)
         out, err = proc.communicate()
         proc.returncode
+
+    def test_make_result(self):
+        # make_result should return a TestResult.
+        ui = self.ui_factory()
+        ui.set_command(commands.Command(ui))
+        result = ui.make_result(lambda: None)
+        result.startTestRun()
+        result.stopTestRun()
+        self.assertEqual(0, result.testsRun)
 
=== modified file 'testrepository/tests/ui/test_cli.py'
--- testrepository/tests/ui/test_cli.py 2010-09-11 19:56:11 +0000
+++ testrepository/tests/ui/test_cli.py 2010-09-20 17:47:50 +0000
@@ -1,11 +1,11 @@
 #
 # Copyright (c) 2009, 2010 Testrepository Contributors
-# 
+#
 # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause
 # license at the users choice. A copy of both licenses are available in the
 # project source as Apache-2.0 and BSD. You may not use this file except in
 # compliance with one of these two licences.
-# 
+#
 # Unless required by applicable law or agreed to in writing, software
 # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
@@ -18,10 +18,10 @@
 from cStringIO import StringIO
 import sys
 
+from testtools import TestCase
 from testtools.matchers import DocTestMatches
 
 from testrepository import arguments
-import testrepository.arguments.command
 from testrepository import commands
 from testrepository.ui import cli
 from testrepository.tests import ResourcedTestCase
@@ -42,7 +42,7 @@
         stdout = StringIO()
         stdin = StringIO()
         stderr = StringIO()
-        ui = cli.UI([], stdin, stdout, stderr)
+        cli.UI([], stdin, stdout, stderr)
 
     def test_stream_comes_from_stdin(self):
         stdout = StringIO()
@@ -89,7 +89,8 @@
         class Case(ResourcedTestCase):
             def method(self):
                 self.fail('quux')
-        ui.output_results(Case('method'))
+        result = ui.make_result(lambda: None)
+        Case('method').run(result)
         self.assertThat(ui._stdout.getvalue(),DocTestMatches(
             """======================================================================
 FAIL: testrepository.tests.ui.test_cli.Case.method
@@ -158,3 +159,60 @@
         cmd.args = [arguments.string.StringArgument('args', max=None)]
         ui.set_command(cmd)
         self.assertEqual({'args':['one', '--two', 'three']}, ui.arguments)
+
+
+class TestCLITestResult(TestCase):
+
+    def make_exc_info(self):
+        # Make an exc_info tuple for use in testing.
+        try:
+            1/0
+        except ZeroDivisionError:
+            return sys.exc_info()
+
+    def make_result(self, stream=None):
+        if stream is None:
+            stream = StringIO()
+        ui = cli.UI([], None, stream, None)
+        return ui.make_result(lambda: None)
+
+    def test_initial_stream(self):
+        # CLITestResult.__init__ does not do anything to the stream it is
+        # given.
+        stream = StringIO()
+        cli.CLITestResult(cli.UI(None, None, None, None), stream, lambda: None)
+        self.assertEqual('', stream.getvalue())
+
+    def test_format_error(self):
+        # CLITestResult formats errors by giving them a big fat line, a title
+        # made up of their 'label' and the name of the test, another different
+        # big fat line, and then the actual error itself.
+        result = self.make_result()
+        error = result._format_error('label', self, 'error text')
+        expected = '%s%s: %s\n%s%s' % (
+            result.sep1, 'label', self.id(), result.sep2, 'error text')
+        self.assertThat(error, DocTestMatches(expected))
+
+    def test_addError_outputs_error(self):
+        # CLITestResult.addError outputs the given error immediately to the
+        # stream.
+        stream = StringIO()
+        result = self.make_result(stream)
+        error = self.make_exc_info()
+        error_text = result._err_details_to_string(self, error)
+        result.addError(self, error)
+        self.assertThat(
+            stream.getvalue(),
+            DocTestMatches(result._format_error('ERROR', self, error_text)))
+
+    def test_addFailure_outputs_failure(self):
+        # CLITestResult.addError outputs the given error immediately to the
+        # stream.
+        stream = StringIO()
+        result = self.make_result(stream)
+        error = self.make_exc_info()
+        error_text = result._err_details_to_string(self, error)
+        result.addFailure(self, error)
+        self.assertThat(
+            stream.getvalue(),
+            DocTestMatches(result._format_error('FAIL', self, error_text)))
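The layout that `test_format_error` asserts — a fat separator line, a `LABEL: test_id` title, a thinner separator, then the error text — is easy to reproduce standalone. The function name, test id, and message below are invented for illustration.

```python
SEP1 = '=' * 70 + '\n'
SEP2 = '-' * 70 + '\n'

def format_error(label, test_id, error_text):
    # Mirror of the layout asserted by test_format_error above.
    return ''.join([SEP1, '%s: %s\n' % (label, test_id), SEP2, error_text])

block = format_error('FAIL', 'pkg.tests.Case.method', 'boom\n')
print(block)
```

The first two lines of `block` are the 70-`=` separator and `FAIL: pkg.tests.Case.method`, and the block ends with the raw error text.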
=== modified file 'testrepository/ui/__init__.py'
--- testrepository/ui/__init__.py 2010-02-28 07:33:06 +0000
+++ testrepository/ui/__init__.py 2010-09-20 17:47:50 +0000
@@ -22,6 +22,9 @@
 for.
 """
 
+from testtools import TestResult
+
+
 class AbstractUI(object):
     """The base class for UI objects, this providers helpers and the interface.
 
@@ -82,6 +85,14 @@
         """Helper for iter_streams which subclasses should implement."""
         raise NotImplementedError(self._iter_streams)
 
+    def make_result(self, get_id):
+        """Make a `TestResult` that can be used to display test results.
+
+        :param get_id: A nullary callable that returns the id of the test run
+            when called.
+        """
+        raise NotImplementedError(self.make_result)
+
     def output_error(self, error_tuple):
         """Show an error to the user.
 
@@ -102,18 +113,6 @@
         """
         raise NotImplementedError(self.output_rest)
 
-    def output_results(self, suite_or_test):
-        """Show suite_or_test to the user by 'running' it.
-
-        This expects the run to be fast/cheap.
-
-        :param suite_or_test: A suite or test to show to the user. This should
-            obey the 'TestCase' protocol - it should have a method run(result)
-            that causes all the tests contained in the object to be handed to
-            the result object.
-        """
-        raise NotImplementedError(self.output_results)
-
     def output_stream(self, stream):
         """Show a byte stream to the user.
 
@@ -163,3 +162,41 @@
         """
         # This might not be the right place.
         raise NotImplementedError(self.subprocess_Popen)
+
+
+class BaseUITestResult(TestResult):
+    """An abstract test result used with the UI.
+
+    AbstractUI.make_result probably wants to return an object like this.
+    """
+
+    def __init__(self, ui, get_id):
+        """Construct an `AbstractUITestResult`.
+
+        :param ui: The UI this result is associated with.
+        :param get_id: A nullary callable that returns the id of the test run.
+        """
+        super(BaseUITestResult, self).__init__()
+        self.ui = ui
+        self.get_id = get_id
+
+    def _output_run(self, run_id):
+        """Output a test run.
+
+        :param run_id: The run id.
+        """
+        if self.ui.options.quiet:
+            return
+        values = [('id', run_id), ('tests', self.testsRun)]
+        failures = len(self.failures) + len(self.errors)
+        if failures:
+            values.append(('failures', failures))
+        skips = sum(map(len, self.skip_reasons.itervalues()))
+        if skips:
+            values.append(('skips', skips))
+        self.ui.output_values(values)
+
+    def stopTestRun(self):
+        super(BaseUITestResult, self).stopTestRun()
+        run_id = self.get_id()
+        self._output_run(run_id)
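The summary logic in `BaseUITestResult._output_run` — always emit `id` and `tests`, add `failures` and `skips` only when non-zero — can be exercised without a UI. Below is a Python 3 sketch (the original is Python 2 and testtools-based); `SummaryValuesResult` is an invented stand-in, and unittest's `skipped` list replaces testtools' `skip_reasons`.

```python
import unittest

class SummaryValuesResult(unittest.TestResult):
    # Sketch of BaseUITestResult._output_run: build ('name', value)
    # pairs, omitting zero counts, instead of calling ui.output_values.
    def summary(self, run_id):
        values = [('id', run_id), ('tests', self.testsRun)]
        failures = len(self.failures) + len(self.errors)
        if failures:
            values.append(('failures', failures))
        skips = len(self.skipped)  # testtools uses skip_reasons here
        if skips:
            values.append(('skips', skips))
        return values

class _Case(unittest.TestCase):
    def test_boom(self):
        self.fail('boom')

result = SummaryValuesResult()
_Case('test_boom').run(result)
print(result.summary('run-1'))  # [('id', 'run-1'), ('tests', 1), ('failures', 1)]
```

Note that errors and failures are folded into a single `failures` count, matching the diff above.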
=== modified file 'testrepository/ui/cli.py'
--- testrepository/ui/cli.py 2010-09-11 19:56:11 +0000
+++ testrepository/ui/cli.py 2010-09-20 17:47:50 +0000
@@ -18,31 +18,34 @@
 import os
 import sys
 
-import testtools
-
 from testrepository import ui
 
-class CLITestResult(testtools.TestResult):
+
+class CLITestResult(ui.BaseUITestResult):
     """A TestResult for the CLI."""
 
-    def __init__(self, stream):
+    def __init__(self, ui, get_id, stream):
         """Construct a CLITestResult writing to stream."""
-        super(CLITestResult, self).__init__()
+        super(CLITestResult, self).__init__(ui, get_id)
         self.stream = stream
         self.sep1 = '=' * 70 + '\n'
         self.sep2 = '-' * 70 + '\n'
 
-    def _show_list(self, label, error_list):
-        for test, output in error_list:
-            self.stream.write(self.sep1)
-            self.stream.write("%s: %s\n" % (label, test.id()))
-            self.stream.write(self.sep2)
-            self.stream.write(output)
-
-    def stopTestRun(self):
-        self._show_list('ERROR', self.errors)
-        self._show_list('FAIL', self.failures)
-        super(CLITestResult, self).stopTestRun()
+    def _format_error(self, label, test, error_text):
+        return ''.join([
+            self.sep1,
+            '%s: %s\n' % (label, test.id()),
+            self.sep2,
+            error_text,
+            ])
+
+    def addError(self, test, err=None, details=None):
+        super(CLITestResult, self).addError(test, err=err, details=details)
+        self.stream.write(self._format_error('ERROR', *(self.errors[-1])))
+
+    def addFailure(self, test, err=None, details=None):
+        super(CLITestResult, self).addFailure(test, err=err, details=details)
+        self.stream.write(self._format_error('FAIL', *(self.failures[-1])))
 
 
 class UI(ui.AbstractUI):
@@ -64,6 +67,9 @@
     def _iter_streams(self, stream_type):
         yield self._stdin
 
+    def make_result(self, get_id):
+        return CLITestResult(self, get_id, self._stdout)
+
     def output_error(self, error_tuple):
         self._stderr.write(str(error_tuple[1]) + '\n')
 
@@ -72,14 +78,6 @@
         if not rest_string.endswith('\n'):
             self._stdout.write('\n')
 
-    def output_results(self, suite_or_test):
-        result = CLITestResult(self._stdout)
-        result.startTestRun()
-        try:
-            suite_or_test.run(result)
-        finally:
-            result.stopTestRun()
-
     def output_stream(self, stream):
         contents = stream.read(65536)
         while contents:
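The behavioural core of this file's change — each failure is written the moment `addFailure`/`addError` fires, rather than batched in `stopTestRun` — can be shown self-contained. A Python 3 sketch; `StreamingResult` and the sample test are invented for illustration, not testrepository's classes.

```python
import io
import unittest

class StreamingResult(unittest.TestResult):
    # Like the new CLITestResult: addError/addFailure write the formatted
    # error to the stream immediately, so users see failures as they occur.
    def __init__(self, stream):
        super().__init__()
        self.stream = stream
        self.sep1 = '=' * 70 + '\n'
        self.sep2 = '-' * 70 + '\n'

    def _format_error(self, label, test, error_text):
        return ''.join(
            [self.sep1, '%s: %s\n' % (label, test.id()), self.sep2, error_text])

    def addError(self, test, err):
        super().addError(test, err)
        # unittest stores (test, formatted_traceback) pairs.
        self.stream.write(self._format_error('ERROR', *self.errors[-1]))

    def addFailure(self, test, err):
        super().addFailure(test, err)
        self.stream.write(self._format_error('FAIL', *self.failures[-1]))

class _Case(unittest.TestCase):
    def test_method(self):
        self.fail('quux')

stream = io.StringIO()
_Case('test_method').run(StreamingResult(stream))
# The failure is already on the stream; no stopTestRun flush needed.
print(stream.getvalue())
```

This is why the old `output_results` (run everything, then dump errors at the end) could be dropped from the UI contract.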
=== modified file 'testrepository/ui/model.py'
--- testrepository/ui/model.py 2010-09-07 12:37:17 +0000
+++ testrepository/ui/model.py 2010-09-20 17:47:50 +0000
@@ -19,6 +19,7 @@
 
 from testrepository import ui
 
+
 class ProcessModel(object):
     """A subprocess.Popen test double."""
 
@@ -31,6 +32,47 @@
         return '', ''
 
 
+class TestSuiteModel(object):
+
+    def __init__(self):
+        self._results = []
+
+    def recordResult(self, method, *args):
+        self._results.append((method, args))
+
+    def run(self, result):
+        for method, args in self._results:
+            getattr(result, method)(*args)
+
+
+class TestResultModel(ui.BaseUITestResult):
+
+    def __init__(self, ui, get_id):
+        super(TestResultModel, self).__init__(ui, get_id)
+        self._suite = TestSuiteModel()
+
+    def startTest(self, test):
+        super(TestResultModel, self).startTest(test)
+        self._suite.recordResult('startTest', test)
+
+    def stopTest(self, test):
+        self._suite.recordResult('stopTest', test)
+
+    def addError(self, test, *args):
+        super(TestResultModel, self).addError(test, *args)
+        self._suite.recordResult('addError', test, *args)
+
+    def addFailure(self, test, *args):
+        super(TestResultModel, self).addFailure(test, *args)
+        self._suite.recordResult('addFailure', test, *args)
+
+    def stopTestRun(self):
+        if self.ui.options.quiet:
+            return
+        self.ui.outputs.append(('results', self._suite))
+        return super(TestResultModel, self).stopTestRun()
+
+
 class UI(ui.AbstractUI):
     """A object based UI.
 
@@ -87,15 +129,15 @@
         for stream_bytes in streams:
             yield StringIO(stream_bytes)
 
+    def make_result(self, get_id):
+        return TestResultModel(self, get_id)
+
     def output_error(self, error_tuple):
         self.outputs.append(('error', error_tuple))
 
     def output_rest(self, rest_string):
         self.outputs.append(('rest', rest_string))
 
-    def output_results(self, suite_or_test):
-        self.outputs.append(('results', suite_or_test))
-
     def output_stream(self, stream):
         self.outputs.append(('stream', stream.read()))
 
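`TestSuiteModel` is a small record/replay double: the model result records each result-method invocation as it happens, and `run(result)` replays those events onto any other result object later, so existing tests can keep asserting against a `('results', suite)` output. A standalone copy of the idea (the `Recorder` sink and string test ids are invented for the demo):

```python
class TestSuiteModel:
    # Record/replay double, as added to testrepository/ui/model.py:
    # capture (method, args) events, then replay them onto another result.
    def __init__(self):
        self._results = []

    def recordResult(self, method, *args):
        self._results.append((method, args))

    def run(self, result):
        for method, args in self._results:
            getattr(result, method)(*args)

class Recorder:
    # Hypothetical sink used to observe the replay.
    def __init__(self):
        self.calls = []

    def startTest(self, test):
        self.calls.append(('startTest', test))

    def stopTest(self, test):
        self.calls.append(('stopTest', test))

suite = TestSuiteModel()
suite.recordResult('startTest', 'test-a')
suite.recordResult('stopTest', 'test-a')
sink = Recorder()
suite.run(sink)
print(sink.calls)  # [('startTest', 'test-a'), ('stopTest', 'test-a')]
```

Replay preserves event order, which is what lets the model UI stay faithful to the old `output_results` behaviour while the real CLI streams results incrementally.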
