Merge lp:~jml/testrepository/show-failures-incrementally-613152 into lp:~testrepository/testrepository/trunk
Status: Merged
Merged at revision: 100
Proposed branch: lp:~jml/testrepository/show-failures-incrementally-613152
Merge into: lp:~testrepository/testrepository/trunk
Diff against target: 884 lines (+334/-143), 16 files modified:
- testrepository/commands/__init__.py (+0/-21)
- testrepository/commands/failing.py (+21/-22)
- testrepository/commands/last.py (+7/-15)
- testrepository/commands/load.py (+15/-15)
- testrepository/results.py (+13/-0)
- testrepository/tests/__init__.py (+17/-0)
- testrepository/tests/commands/test_failing.py (+16/-7)
- testrepository/tests/commands/test_last.py (+3/-5)
- testrepository/tests/commands/test_load.py (+9/-6)
- testrepository/tests/test_matchers.py (+17/-0)
- testrepository/tests/test_results.py (+28/-0)
- testrepository/tests/test_ui.py (+9/-8)
- testrepository/tests/ui/test_cli.py (+63/-5)
- testrepository/ui/__init__.py (+49/-12)
- testrepository/ui/cli.py (+22/-24)
- testrepository/ui/model.py (+45/-3)
To merge this branch: bzr merge lp:~jml/testrepository/show-failures-incrementally-613152
Related bugs: (none)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Robert Collins | | | Approve

Review via email: mp+31765@code.launchpad.net
Commit message
Show test failures and errors as we get them in testr load.
Description of the change
In the Wikipedia spirit of editing boldly, I've hacked up testrepository to show failures incrementally. Most of the damage occurs in the UI contract.
Here's what I've done:
* Added UI.make_result to the public interface, making each UI responsible for constructing its own result object.
* Changed 'failing', 'load' and 'last' to use the result provided by the UI, rather than making their own staging result object to capture the stream.
* Changed Command.output_run to no longer take an output stream, as such a thing is no longer required -- the UI's result takes care of it now.
* Dropped UI.output_results, since there's no longer any actual use case.
* Gave the CLI UI a TestResult implementation that prints errors and failures as it gets them.
Changing 'last' and 'failing' wasn't really necessary to fix bug 613152, and neither was removing UI.output_results or changing Command.output_run, but it seemed better to me to go the whole hog.
I also cleaned up pyflakes warnings where I saw them.
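In outline, the new contract looks like this (a minimal sketch based on the test added to testrepository/tests/test_ui.py in the diff below; the model UI's no-argument constructor is assumed):

    from testrepository import commands
    from testrepository.ui import model

    ui = model.UI()                        # the object-based test-double UI
    ui.set_command(commands.Command(ui))   # gives the UI its options
    result = ui.make_result(lambda: 0)     # nullary callable returning the run id
    result.startTestRun()
    # ... run a test case or subunit stream into `result`; the CLI UI's
    # result prints failures and errors as they arrive ...
    result.stopTestRun()                   # the run summary is emitted here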
- 106. By Jonathan Lange: Remove output_results, not needed.
Jelmer Vernooij (jelmer) wrote:
I can't find the "I like this" button.
Jonathan Lange (jml) wrote:
Still need a review for this.
Robert Collins (lifeless) wrote:
output_run looks like it doesn't output a run anymore: its intent is sufficiently different that I would like the name to actually match.
output_run_summary, perhaps.
or
output_result, which seems to be what it does now.
I think things are a little confused, in fact; it looks to me like the summary printing at the end of the run wants to happen in the stopTestRun method of the result returned from makeResult, and output_run can be wholly deleted.
You've added a TODO for which there is an example of doing that very thing elsewhere in your diff.
The rest looks ok.
Thanks for doing this, it is appreciated, I've just been stupidly busy bootstrapping stuff in lp.
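A rough sketch of the shape suggested above (the class name here is invented; the branch's actual BaseUITestResult appears in testrepository/ui/__init__.py in the diff below):

    from testtools import TestResult

    class UISummaryResult(TestResult):
        """Sketch: the result returned by make_result owns the summary."""

        def __init__(self, ui, get_id):
            super(UISummaryResult, self).__init__()
            self.ui = ui
            self.get_id = get_id

        def stopTestRun(self):
            super(UISummaryResult, self).stopTestRun()
            # What output_run used to print now happens here, so
            # output_run itself can be deleted.
            values = [('id', self.get_id()), ('tests', self.testsRun)]
            failures = len(self.failures) + len(self.errors)
            if failures:
                values.append(('failures', failures))
            self.ui.output_values(values)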
- 107. By Jonathan Lange: Merge trunk, doing major work to resolve the conflict in the failing command.
- 108. By Jonathan Lange: make_result now takes a callable that returns the id of the test run. Not actually used yet.
- 109. By Jonathan Lange: Refactor the CLITestResult tests so they don't care so much about how results are constructed.
- 110. By Jonathan Lange: Wildcard object equal to everything.
- 111. By Jonathan Lange: Use Wildcard to make matching UI output a little nicer.
- 112. By Jonathan Lange: Give the UI's TestResult object full responsibility for summing up the result of the test.
- 113. By Jonathan Lange: Oops.
- 114. By Jonathan Lange: Delete unused output_run.
- 115. By Jonathan Lange: Tests for results module.
- 116. By Jonathan Lange: Probably not.
Jonathan Lange (jml) wrote:
As indicated on IRC, I didn't add a TODO, I just moved it.
It took me a while to refactor the code to use stopTestRun rather than output_run. I have had to change some of the behaviour to do so.
Specifically:
* 'testr failing' now shows run id, total tests and skip count as well as failure count
* 'testr failing' now has return code 1 when there are failing tests
* 'testr load' will show skips
There's probably other stuff, although I tried to minimize it. It's big enough that it definitely needs review.
Robert Collins (lifeless) wrote:
+ def _make_result(self, repo, evaluator):
+     if self.ui.options.list:
+         return evaluator
these two things seem disconnected; perhaps rather than evaluator you should say list_result or something. I think something originally together has been split out far enough that the parameter needs a better name.
+Wildcard = _Wildcard()
perhaps
wildcard = Wildcard()
would be nicer. Many things (like str, object, etc) are lowercase for instances in Python.
+ def _output_run(self, run_id):
def _output_summary
- I think.
+ return ''.join([
+ self.sep1,
+ '%s: %s\n' % (label, test.id()),
+ self.sep2,
+ error_text,
+ ])
Looks like a lurking UnicodeDecodeError to me; we either need to make this unicode always or manually encode error. One way to test that would be to throw a mixed encoding, non-ascii test outcome at it.
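For example (Python 2; a hedged illustration of the failure mode, not code from the branch):

    # -*- coding: utf-8 -*-
    sep1 = u'=' * 70 + u'\n'
    error_text = 'проба\n'   # non-ASCII *bytes*, e.g. from a subunit stream
    try:
        # joining unicode with non-ASCII bytes forces an implicit
        # ASCII decode of the bytes, which raises
        u''.join([sep1, error_text])
    except UnicodeDecodeError as e:
        print 'lurking UnicodeDecodeError: %s' % e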
Would you be kind enough to do these tweaks? Then it's definitely gtg.
Jonathan Lange (jml) wrote:
On Sun, Sep 26, 2010 at 9:46 AM, Robert Collins
<email address hidden> wrote:
> Review: Approve
> + def _make_result(self, repo, evaluator):
> +     if self.ui.options.list:
> +         return evaluator
>
> these two things seem disconnected; perhaps rather than evaluator you should say list_result or something. I think something originally together has been split out far enough that the parameter needs a better name.
Done.
>
>
> +Wildcard = _Wildcard()
>
> perhaps
> wildcard = Wildcard()
>
> would be nicer. Many things (like str, object, etc) are lowercase for instances in Python.
>
Instances of 'type', perhaps. I chose the case to reflect None, True
and False: other singleton constants in Python.
>
> + def _output_run(self, run_id):
>
> def _output_summary
>
> - I think.
>
Changed.
>
> + return ''.join([
> + self.sep1,
> + '%s: %s\n' % (label, test.id()),
> + self.sep2,
> + error_text,
> + ])
>
> Looks like a lurking UnicodeDecodeError to me; we either need to make this unicode always or manually encode error. One way to test that would be to throw a mixed encoding, non-ascii test outcome at it.
Well, it's not a *new* lurking UnicodeDecodeError. It's equivalent to
what was there earlier.
We are always going to be getting the error_text from the base
TestResult. In this case, we are relying on testtools to store
unicode. I've changed all of the literals to be unicode to at least
communicate this more clearly.
I was going to add a test (below), but it seems to be a case of
"Doctor, it hurts when I do this!".
    def test_format_error_unicode(self):
        result = self.make_result()
        error_text = 'foo' + u'проба'
        error = result._format_error('label', self, error_text)
        expected = u'%s%s: %s\n%s%s' % (
            result.sep1, 'label', self.id(), result.sep2, error_text)
        self.assertThat(error, DocTestMatches(expected))
>
> Would you be kind enough to do these tweaks? Then it's definitely gtg.
Thanks,
jml
Jelmer Vernooij (jelmer) wrote:
W00t!
Preview Diff
1 | === modified file 'testrepository/commands/__init__.py' |
2 | --- testrepository/commands/__init__.py 2010-02-28 23:02:29 +0000 |
3 | +++ testrepository/commands/__init__.py 2010-09-20 17:47:50 +0000 |
4 | @@ -150,27 +150,6 @@ |
5 | def _init(self): |
6 | """Per command init call, called into by Command.__init__.""" |
7 | |
8 | - def output_run(self, run_id, output, evaluator): |
9 | - """Output a test run. |
10 | - |
11 | - :param run_id: The run id. |
12 | - :param output: A StringIO containing a subunit stream for some portion of the run to show. |
13 | - :param evaluator: A TestResult evaluating the entire run. |
14 | - """ |
15 | - if self.ui.options.quiet: |
16 | - return |
17 | - if output.getvalue(): |
18 | - output.seek(0) |
19 | - self.ui.output_results(subunit.ProtocolTestCase(output)) |
20 | - values = [('id', run_id), ('tests', evaluator.testsRun)] |
21 | - failures = len(evaluator.failures) + len(evaluator.errors) |
22 | - if failures: |
23 | - values.append(('failures', failures)) |
24 | - skips = sum(map(len, evaluator.skip_reasons.itervalues())) |
25 | - if skips: |
26 | - values.append(('skips', skips)) |
27 | - self.ui.output_values(values) |
28 | - |
29 | def run(self): |
30 | """The core logic for this command to be implemented by subclasses.""" |
31 | raise NotImplementedError(self.run) |
32 | |
33 | === modified file 'testrepository/commands/failing.py' |
34 | --- testrepository/commands/failing.py 2010-09-11 19:56:11 +0000 |
35 | +++ testrepository/commands/failing.py 2010-09-20 17:47:50 +0000 |
36 | @@ -14,13 +14,13 @@ |
37 | |
38 | """Show the current failures in the repository.""" |
39 | |
40 | -from cStringIO import StringIO |
41 | import optparse |
42 | |
43 | -import subunit.test_results |
44 | from testtools import MultiTestResult, TestResult |
45 | |
46 | from testrepository.commands import Command |
47 | +from testrepository.results import TestResultFilter |
48 | + |
49 | |
50 | class failing(Command): |
51 | """Show the current failures known by the repository. |
52 | @@ -41,17 +41,31 @@ |
53 | default=False, help="Show only a list of failing tests."), |
54 | ] |
55 | |
56 | + def _list_subunit(self, run): |
57 | + # TODO only failing tests. |
58 | + stream = run.get_subunit_stream() |
59 | + self.ui.output_stream(stream) |
60 | + if stream: |
61 | + return 1 |
62 | + else: |
63 | + return 0 |
64 | + |
65 | + def _make_result(self, repo, evaluator): |
66 | + if self.ui.options.list: |
67 | + return evaluator |
68 | + output_result = self.ui.make_result(repo.latest_id) |
69 | + filtered = TestResultFilter(output_result, filter_skip=True) |
70 | + return MultiTestResult(evaluator, filtered) |
71 | + |
72 | def run(self): |
73 | repo = self.repository_factory.open(self.ui.here) |
74 | run = repo.get_failing() |
75 | + if self.ui.options.subunit: |
76 | + return self._list_subunit(run) |
77 | case = run.get_test() |
78 | failed = False |
79 | evaluator = TestResult() |
80 | - output = StringIO() |
81 | - output_stream = subunit.TestProtocolClient(output) |
82 | - filtered = subunit.test_results.TestResultFilter(output_stream, |
83 | - filter_skip=True) |
84 | - result = MultiTestResult(evaluator, filtered) |
85 | + result = self._make_result(repo, evaluator) |
86 | result.startTestRun() |
87 | try: |
88 | case.run(result) |
89 | @@ -66,19 +80,4 @@ |
90 | failing_tests = [ |
91 | test for test, _ in evaluator.errors + evaluator.failures] |
92 | self.ui.output_tests(failing_tests) |
93 | - return result |
94 | - if self.ui.options.subunit: |
95 | - # TODO only failing tests. |
96 | - self.ui.output_stream(run.get_subunit_stream()) |
97 | - return result |
98 | - if self.ui.options.quiet: |
99 | - return result |
100 | - if output.getvalue(): |
101 | - output.seek(0) |
102 | - self.ui.output_results(subunit.ProtocolTestCase(output)) |
103 | - values = [] |
104 | - failures = len(evaluator.failures) + len(evaluator.errors) |
105 | - if failures: |
106 | - values.append(('failures', failures)) |
107 | - self.ui.output_values(values) |
108 | return result |
109 | |
110 | === modified file 'testrepository/commands/last.py' |
111 | --- testrepository/commands/last.py 2010-01-10 08:52:00 +0000 |
112 | +++ testrepository/commands/last.py 2010-09-20 17:47:50 +0000 |
113 | @@ -5,7 +5,7 @@ |
114 | # license at the users choice. A copy of both licenses are available in the |
115 | # project source as Apache-2.0 and BSD. You may not use this file except in |
116 | # compliance with one of these two licences. |
117 | -# |
118 | +# |
119 | # Unless required by applicable law or agreed to in writing, software |
120 | # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT |
121 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
122 | @@ -14,16 +14,13 @@ |
123 | |
124 | """Show the last run loaded into a repository.""" |
125 | |
126 | -from cStringIO import StringIO |
127 | - |
128 | -import subunit.test_results |
129 | -from testtools import MultiTestResult, TestResult |
130 | - |
131 | from testrepository.commands import Command |
132 | +from testrepository.results import TestResultFilter |
133 | + |
134 | |
135 | class last(Command): |
136 | """Show the last run loaded into a repository. |
137 | - |
138 | + |
139 | Failing tests are shown on the console and a summary of the run is printed |
140 | at the end. |
141 | """ |
142 | @@ -33,19 +30,14 @@ |
143 | run_id = repo.latest_id() |
144 | case = repo.get_test_run(run_id).get_test() |
145 | failed = False |
146 | - evaluator = TestResult() |
147 | - output = StringIO() |
148 | - output_stream = subunit.TestProtocolClient(output) |
149 | - filtered = subunit.test_results.TestResultFilter(output_stream, |
150 | - filter_skip=True) |
151 | - result = MultiTestResult(evaluator, filtered) |
152 | + output_result = self.ui.make_result(lambda: run_id) |
153 | + result = TestResultFilter(output_result, filter_skip=True) |
154 | result.startTestRun() |
155 | try: |
156 | case.run(result) |
157 | finally: |
158 | result.stopTestRun() |
159 | - failed = not evaluator.wasSuccessful() |
160 | - self.output_run(run_id, output, evaluator) |
161 | + failed = not result.wasSuccessful() |
162 | if failed: |
163 | return 1 |
164 | else: |
165 | |
166 | === modified file 'testrepository/commands/load.py' |
167 | --- testrepository/commands/load.py 2010-01-10 08:52:00 +0000 |
168 | +++ testrepository/commands/load.py 2010-09-20 17:47:50 +0000 |
169 | @@ -1,11 +1,11 @@ |
170 | # |
171 | # Copyright (c) 2009 Testrepository Contributors |
172 | -# |
173 | +# |
174 | # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause |
175 | # license at the users choice. A copy of both licenses are available in the |
176 | # project source as Apache-2.0 and BSD. You may not use this file except in |
177 | # compliance with one of these two licences. |
178 | -# |
179 | +# |
180 | # Unless required by applicable law or agreed to in writing, software |
181 | # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT |
182 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
183 | @@ -14,16 +14,16 @@ |
184 | |
185 | """Load data into a repository.""" |
186 | |
187 | -from cStringIO import StringIO |
188 | - |
189 | -import subunit.test_results |
190 | -from testtools import MultiTestResult, TestResult |
191 | +import subunit |
192 | +from testtools import MultiTestResult |
193 | |
194 | from testrepository.commands import Command |
195 | +from testrepository.results import TestResultFilter |
196 | + |
197 | |
198 | class load(Command): |
199 | """Load a subunit stream into a repository. |
200 | - |
201 | + |
202 | Failing tests are shown on the console and a summary of the stream is |
203 | printed at the end. |
204 | """ |
205 | @@ -34,21 +34,21 @@ |
206 | path = self.ui.here |
207 | repo = self.repository_factory.open(path) |
208 | failed = False |
209 | + run_id = None |
210 | for stream in self.ui.iter_streams('subunit'): |
211 | inserter = repo.get_inserter() |
212 | - evaluator = TestResult() |
213 | - output = StringIO() |
214 | - output_stream = subunit.TestProtocolClient(output) |
215 | - filtered = subunit.test_results.TestResultFilter(output_stream, |
216 | - filter_skip=True) |
217 | + output_result = self.ui.make_result(lambda: run_id) |
218 | + # XXX: We want to *count* skips, but not show them. |
219 | + filtered = TestResultFilter(output_result, filter_skip=False) |
220 | case = subunit.ProtocolTestCase(stream) |
221 | + filtered.startTestRun() |
222 | inserter.startTestRun() |
223 | try: |
224 | - case.run(MultiTestResult(inserter, evaluator, filtered)) |
225 | + case.run(MultiTestResult(inserter, filtered)) |
226 | finally: |
227 | run_id = inserter.stopTestRun() |
228 | - failed = failed or not evaluator.wasSuccessful() |
229 | - self.output_run(run_id, output, evaluator) |
230 | + filtered.stopTestRun() |
231 | + failed = failed or not filtered.wasSuccessful() |
232 | if failed: |
233 | return 1 |
234 | else: |
235 | |
236 | === added file 'testrepository/results.py' |
237 | --- testrepository/results.py 1970-01-01 00:00:00 +0000 |
238 | +++ testrepository/results.py 2010-09-20 17:47:50 +0000 |
239 | @@ -0,0 +1,13 @@ |
240 | +from subunit import test_results |
241 | + |
242 | + |
243 | +class TestResultFilter(test_results.TestResultFilter): |
244 | + """Test result filter.""" |
245 | + |
246 | + def _filtered(self): |
247 | + super(TestResultFilter, self)._filtered() |
248 | + # XXX: This is really crappy. It assumes that the test result we |
249 | + # actually care about is decorated twice. Probably the more correct |
250 | + # thing to do is fix subunit so that incrementing 'testsRun' on a test |
251 | + # result increments them on the decorated test result. |
252 | + self.decorated.decorated.testsRun += 1 |
253 | |
254 | === modified file 'testrepository/tests/__init__.py' |
255 | --- testrepository/tests/__init__.py 2010-01-16 00:01:45 +0000 |
256 | +++ testrepository/tests/__init__.py 2010-09-20 17:47:50 +0000 |
257 | @@ -30,6 +30,22 @@ |
258 | self) |
259 | |
260 | |
261 | +class _Wildcard(object): |
262 | + """Object that is equal to everything.""" |
263 | + |
264 | + def __repr__(self): |
265 | + return '*' |
266 | + |
267 | + def __eq__(self, other): |
268 | + return True |
269 | + |
270 | + def __ne__(self, other): |
271 | + return False |
272 | + |
273 | + |
274 | +Wildcard = _Wildcard() |
275 | + |
276 | + |
277 | def test_suite(): |
278 | packages = [ |
279 | 'arguments', |
280 | @@ -43,6 +59,7 @@ |
281 | 'matchers', |
282 | 'monkeypatch', |
283 | 'repository', |
284 | + 'results', |
285 | 'setup', |
286 | 'stubpackage', |
287 | 'testr', |
288 | |
289 | === modified file 'testrepository/tests/commands/test_failing.py' |
290 | --- testrepository/tests/commands/test_failing.py 2010-09-07 12:37:17 +0000 |
291 | +++ testrepository/tests/commands/test_failing.py 2010-09-20 17:47:50 +0000 |
292 | @@ -22,7 +22,7 @@ |
293 | from testrepository.commands import failing |
294 | from testrepository.ui.model import UI |
295 | from testrepository.repository import memory |
296 | -from testrepository.tests import ResourcedTestCase |
297 | +from testrepository.tests import ResourcedTestCase, Wildcard |
298 | |
299 | |
300 | class TestCommand(ResourcedTestCase): |
301 | @@ -48,14 +48,12 @@ |
302 | Cases('ok').run(inserter) |
303 | inserter.stopTestRun() |
304 | self.assertEqual(1, cmd.execute()) |
305 | - self.assertEqual('results', ui.outputs[0][0]) |
306 | - suite = ui.outputs[0][1] |
307 | - ui.outputs[0] = ('results', None) |
308 | # We should have seen test outputs (of the failure) and summary data. |
309 | self.assertEqual([ |
310 | - ('results', None), |
311 | - ('values', [('failures', 1)])], |
312 | + ('results', Wildcard), |
313 | + ('values', [('id', 0), ('tests', 1), ('failures', 1)])], |
314 | ui.outputs) |
315 | + suite = ui.outputs[0][1] |
316 | result = testtools.TestResult() |
317 | result.startTestRun() |
318 | try: |
319 | @@ -116,6 +114,16 @@ |
320 | open = cmd.repository_factory.open |
321 | def decorate_open_with_get_failing(url): |
322 | repo = open(url) |
323 | + inserter = repo.get_inserter() |
324 | + inserter.startTestRun() |
325 | + class Cases(ResourcedTestCase): |
326 | + def failing(self): |
327 | + self.fail('foo') |
328 | + def ok(self): |
329 | + pass |
330 | + Cases('failing').run(inserter) |
331 | + Cases('ok').run(inserter) |
332 | + inserter.stopTestRun() |
333 | orig = repo.get_failing |
334 | def get_failing(): |
335 | calls.append(True) |
336 | @@ -124,5 +132,6 @@ |
337 | return repo |
338 | cmd.repository_factory.open = decorate_open_with_get_failing |
339 | cmd.repository_factory.initialise(ui.here) |
340 | - self.assertEqual(0, cmd.execute()) |
341 | + self.assertEqual(1, cmd.execute()) |
342 | self.assertEqual([True], calls) |
343 | + |
344 | |
345 | === modified file 'testrepository/tests/commands/test_last.py' |
346 | --- testrepository/tests/commands/test_last.py 2010-01-10 08:52:00 +0000 |
347 | +++ testrepository/tests/commands/test_last.py 2010-09-20 17:47:50 +0000 |
348 | @@ -19,7 +19,7 @@ |
349 | from testrepository.commands import last |
350 | from testrepository.ui.model import UI |
351 | from testrepository.repository import memory |
352 | -from testrepository.tests import ResourcedTestCase |
353 | +from testrepository.tests import ResourcedTestCase, Wildcard |
354 | |
355 | |
356 | class TestCommand(ResourcedTestCase): |
357 | @@ -45,14 +45,12 @@ |
358 | Cases('ok').run(inserter) |
359 | id = inserter.stopTestRun() |
360 | self.assertEqual(1, cmd.execute()) |
361 | - self.assertEqual('results', ui.outputs[0][0]) |
362 | - suite = ui.outputs[0][1] |
363 | - ui.outputs[0] = ('results', None) |
364 | # We should have seen test outputs (of the failure) and summary data. |
365 | self.assertEqual([ |
366 | - ('results', None), |
367 | + ('results', Wildcard), |
368 | ('values', [('id', id), ('tests', 2), ('failures', 1)])], |
369 | ui.outputs) |
370 | + suite = ui.outputs[0][1] |
371 | result = testtools.TestResult() |
372 | result.startTestRun() |
373 | try: |
374 | |
375 | === modified file 'testrepository/tests/commands/test_load.py' |
376 | --- testrepository/tests/commands/test_load.py 2010-01-08 12:08:41 +0000 |
377 | +++ testrepository/tests/commands/test_load.py 2010-09-20 17:47:50 +0000 |
378 | @@ -18,7 +18,7 @@ |
379 | |
380 | from testrepository.commands import load |
381 | from testrepository.ui.model import UI |
382 | -from testrepository.tests import ResourcedTestCase |
383 | +from testrepository.tests import ResourcedTestCase, Wildcard |
384 | from testrepository.tests.test_repository import RecordingRepositoryFactory |
385 | from testrepository.repository import memory |
386 | |
387 | @@ -77,10 +77,8 @@ |
388 | cmd.repository_factory.initialise(ui.here) |
389 | self.assertEqual(1, cmd.execute()) |
390 | suite = ui.outputs[0][1] |
391 | - self.assertEqual('results', ui.outputs[0][0]) |
392 | - ui.outputs[0] = ('results', None) |
393 | self.assertEqual([ |
394 | - ('results', None), |
395 | + ('results', Wildcard), |
396 | ('values', [('id', 0), ('tests', 1), ('failures', 1)])], |
397 | ui.outputs) |
398 | result = testtools.TestResult() |
399 | @@ -100,7 +98,8 @@ |
400 | cmd.repository_factory.initialise(ui.here) |
401 | self.assertEqual(0, cmd.execute()) |
402 | self.assertEqual( |
403 | - [('values', [('id', 0), ('tests', 1), ('skips', 1)])], |
404 | + [('results', Wildcard), |
405 | + ('values', [('id', 0), ('tests', 1), ('skips', 1)])], |
406 | ui.outputs) |
407 | |
408 | def test_load_new_shows_test_summary_no_tests(self): |
409 | @@ -110,7 +109,9 @@ |
410 | cmd.repository_factory = memory.RepositoryFactory() |
411 | cmd.repository_factory.initialise(ui.here) |
412 | self.assertEqual(0, cmd.execute()) |
413 | - self.assertEqual([('values', [('id', 0), ('tests', 0)])], ui.outputs) |
414 | + self.assertEqual( |
415 | + [('results', Wildcard), ('values', [('id', 0), ('tests', 0)])], |
416 | + ui.outputs) |
417 | |
418 | def test_load_new_shows_test_summary_per_stream(self): |
419 | # This may not be the final layout, but for now per-stream stats are |
420 | @@ -122,7 +123,9 @@ |
421 | cmd.repository_factory.initialise(ui.here) |
422 | self.assertEqual(0, cmd.execute()) |
423 | self.assertEqual([ |
424 | + ('results', Wildcard), |
425 | ('values', [('id', 0), ('tests', 0)]), |
426 | + ('results', Wildcard), |
427 | ('values', [('id', 1), ('tests', 0)])], |
428 | ui.outputs) |
429 | |
430 | |
431 | === modified file 'testrepository/tests/test_matchers.py' |
432 | --- testrepository/tests/test_matchers.py 2010-01-16 00:01:45 +0000 |
433 | +++ testrepository/tests/test_matchers.py 2010-09-20 17:47:50 +0000 |
434 | @@ -15,6 +15,7 @@ |
435 | """Tests for matchers used by or for testing testrepository.""" |
436 | |
437 | import sys |
438 | +from testtools import TestCase |
439 | |
440 | from testrepository.tests import ResourcedTestCase |
441 | from testrepository.tests.matchers import MatchesException |
442 | @@ -55,3 +56,19 @@ |
443 | error = sys.exc_info() |
444 | mismatch = matcher.match(error) |
445 | self.assertEqual(None, mismatch) |
446 | + |
447 | + |
448 | +class TestWildcard(TestCase): |
449 | + |
450 | + def test_wildcard_equals_everything(self): |
451 | + from testrepository.tests import Wildcard |
452 | + self.assertTrue(Wildcard == 5) |
453 | + self.assertTrue(Wildcard == 'orange') |
454 | + self.assertTrue('orange' == Wildcard) |
455 | + self.assertTrue(5 == Wildcard) |
456 | + |
457 | + def test_wildcard_not_equals_nothing(self): |
458 | + from testrepository.tests import Wildcard |
459 | + self.assertFalse(Wildcard != 5) |
460 | + self.assertFalse(Wildcard != 'orange') |
461 | + |
462 | |
463 | === added file 'testrepository/tests/test_results.py' |
464 | --- testrepository/tests/test_results.py 1970-01-01 00:00:00 +0000 |
465 | +++ testrepository/tests/test_results.py 2010-09-20 17:47:50 +0000 |
466 | @@ -0,0 +1,28 @@ |
467 | +# |
468 | +# Copyright (c) 2010 Testrepository Contributors |
469 | +# |
470 | +# Licensed under either the Apache License, Version 2.0 or the BSD 3-clause |
471 | +# license at the users choice. A copy of both licenses are available in the |
472 | +# project source as Apache-2.0 and BSD. You may not use this file except in |
473 | +# compliance with one of these two licences. |
474 | +# |
475 | +# Unless required by applicable law or agreed to in writing, software |
476 | +# distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT |
477 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
478 | +# license you chose for the specific language governing permissions and |
479 | +# limitations under that license. |
480 | + |
481 | +from testtools import TestCase, TestResult |
482 | + |
483 | +from testrepository.results import TestResultFilter |
484 | + |
485 | + |
486 | +class ResultFilter(TestCase): |
487 | + |
488 | + def test_addSuccess_increases_count(self): |
489 | + result = TestResult() |
490 | + filtered = TestResultFilter(result) |
491 | + filtered.startTest(self) |
492 | + filtered.addSuccess(self) |
493 | + filtered.stopTest(self) |
494 | + self.assertEqual(1, result.testsRun) |
495 | |
496 | === modified file 'testrepository/tests/test_ui.py' |
497 | --- testrepository/tests/test_ui.py 2010-09-07 12:37:17 +0000 |
498 | +++ testrepository/tests/test_ui.py 2010-09-20 17:47:50 +0000 |
499 | @@ -102,14 +102,6 @@ |
500 | ui = self.get_test_ui() |
501 | ui.output_rest('') |
502 | |
503 | - def test_output_results(self): |
504 | - # output_results can be called and takes a thing that can be 'run'. |
505 | - ui = self.get_test_ui() |
506 | - class Case(ResourcedTestCase): |
507 | - def method(self): |
508 | - pass |
509 | - ui.output_results(Case('method')) |
510 | - |
511 | def test_output_stream(self): |
512 | # a stream of bytes can be output. |
513 | ui = self.get_test_ui() |
514 | @@ -192,3 +184,12 @@ |
515 | stderr=subprocess.PIPE) |
516 | out, err = proc.communicate() |
517 | proc.returncode |
518 | + |
519 | + def test_make_result(self): |
520 | + # make_result should return a TestResult. |
521 | + ui = self.ui_factory() |
522 | + ui.set_command(commands.Command(ui)) |
523 | + result = ui.make_result(lambda: None) |
524 | + result.startTestRun() |
525 | + result.stopTestRun() |
526 | + self.assertEqual(0, result.testsRun) |
527 | |
528 | === modified file 'testrepository/tests/ui/test_cli.py' |
529 | --- testrepository/tests/ui/test_cli.py 2010-09-11 19:56:11 +0000 |
530 | +++ testrepository/tests/ui/test_cli.py 2010-09-20 17:47:50 +0000 |
531 | @@ -1,11 +1,11 @@ |
532 | # |
533 | # Copyright (c) 2009, 2010 Testrepository Contributors |
534 | -# |
535 | +# |
536 | # Licensed under either the Apache License, Version 2.0 or the BSD 3-clause |
537 | # license at the users choice. A copy of both licenses are available in the |
538 | # project source as Apache-2.0 and BSD. You may not use this file except in |
539 | # compliance with one of these two licences. |
540 | -# |
541 | +# |
542 | # Unless required by applicable law or agreed to in writing, software |
543 | # distributed under these licenses is distributed on an "AS IS" BASIS, WITHOUT |
544 | # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
545 | @@ -18,10 +18,10 @@ |
546 | from cStringIO import StringIO |
547 | import sys |
548 | |
549 | +from testtools import TestCase |
550 | from testtools.matchers import DocTestMatches |
551 | |
552 | from testrepository import arguments |
553 | -import testrepository.arguments.command |
554 | from testrepository import commands |
555 | from testrepository.ui import cli |
556 | from testrepository.tests import ResourcedTestCase |
557 | @@ -42,7 +42,7 @@ |
558 | stdout = StringIO() |
559 | stdin = StringIO() |
560 | stderr = StringIO() |
561 | - ui = cli.UI([], stdin, stdout, stderr) |
562 | + cli.UI([], stdin, stdout, stderr) |
563 | |
564 | def test_stream_comes_from_stdin(self): |
565 | stdout = StringIO() |
566 | @@ -89,7 +89,8 @@ |
567 | class Case(ResourcedTestCase): |
568 | def method(self): |
569 | self.fail('quux') |
570 | - ui.output_results(Case('method')) |
571 | + result = ui.make_result(lambda: None) |
572 | + Case('method').run(result) |
573 | self.assertThat(ui._stdout.getvalue(),DocTestMatches( |
574 | """====================================================================== |
575 | FAIL: testrepository.tests.ui.test_cli.Case.method |
576 | @@ -158,3 +159,60 @@ |
577 | cmd.args = [arguments.string.StringArgument('args', max=None)] |
578 | ui.set_command(cmd) |
579 | self.assertEqual({'args':['one', '--two', 'three']}, ui.arguments) |
580 | + |
581 | + |
582 | +class TestCLITestResult(TestCase): |
583 | + |
584 | + def make_exc_info(self): |
585 | + # Make an exc_info tuple for use in testing. |
586 | + try: |
587 | + 1/0 |
588 | + except ZeroDivisionError: |
589 | + return sys.exc_info() |
590 | + |
591 | + def make_result(self, stream=None): |
592 | + if stream is None: |
593 | + stream = StringIO() |
594 | + ui = cli.UI([], None, stream, None) |
595 | + return ui.make_result(lambda: None) |
596 | + |
597 | + def test_initial_stream(self): |
598 | + # CLITestResult.__init__ does not do anything to the stream it is |
599 | + # given. |
600 | + stream = StringIO() |
601 | + cli.CLITestResult(cli.UI(None, None, None, None), stream, lambda: None) |
602 | + self.assertEqual('', stream.getvalue()) |
603 | + |
604 | + def test_format_error(self): |
605 | + # CLITestResult formats errors by giving them a big fat line, a title |
606 | + # made up of their 'label' and the name of the test, another different |
607 | + # big fat line, and then the actual error itself. |
608 | + result = self.make_result() |
609 | + error = result._format_error('label', self, 'error text') |
610 | + expected = '%s%s: %s\n%s%s' % ( |
611 | + result.sep1, 'label', self.id(), result.sep2, 'error text') |
612 | + self.assertThat(error, DocTestMatches(expected)) |
613 | + |
614 | + def test_addError_outputs_error(self): |
615 | + # CLITestResult.addError outputs the given error immediately to the |
616 | + # stream. |
617 | + stream = StringIO() |
618 | + result = self.make_result(stream) |
619 | + error = self.make_exc_info() |
620 | + error_text = result._err_details_to_string(self, error) |
621 | + result.addError(self, error) |
622 | + self.assertThat( |
623 | + stream.getvalue(), |
624 | + DocTestMatches(result._format_error('ERROR', self, error_text))) |
625 | + |
626 | + def test_addFailure_outputs_failure(self): |
627 | + # CLITestResult.addError outputs the given error immediately to the |
628 | + # stream. |
629 | + stream = StringIO() |
630 | + result = self.make_result(stream) |
631 | + error = self.make_exc_info() |
632 | + error_text = result._err_details_to_string(self, error) |
633 | + result.addFailure(self, error) |
634 | + self.assertThat( |
635 | + stream.getvalue(), |
636 | + DocTestMatches(result._format_error('FAIL', self, error_text))) |
637 | |
638 | === modified file 'testrepository/ui/__init__.py' |
639 | --- testrepository/ui/__init__.py 2010-02-28 07:33:06 +0000 |
640 | +++ testrepository/ui/__init__.py 2010-09-20 17:47:50 +0000 |
641 | @@ -22,6 +22,9 @@ |
642 | for. |
643 | """ |
644 | |
645 | +from testtools import TestResult |
646 | + |
647 | + |
648 | class AbstractUI(object): |
649 | """The base class for UI objects, this providers helpers and the interface. |
650 | |
651 | @@ -82,6 +85,14 @@ |
652 | """Helper for iter_streams which subclasses should implement.""" |
653 | raise NotImplementedError(self._iter_streams) |
654 | |
655 | + def make_result(self, get_id): |
656 | + """Make a `TestResult` that can be used to display test results. |
657 | + |
658 | + :param get_id: A nullary callable that returns the id of the test run |
659 | + when called. |
660 | + """ |
661 | + raise NotImplementedError(self.make_result) |
662 | + |
663 | def output_error(self, error_tuple): |
664 | """Show an error to the user. |
665 | |
666 | @@ -102,18 +113,6 @@ |
667 | """ |
668 | raise NotImplementedError(self.output_rest) |
669 | |
670 | - def output_results(self, suite_or_test): |
671 | - """Show suite_or_test to the user by 'running' it. |
672 | - |
673 | - This expects the run to be fast/cheap. |
674 | - |
675 | - :param suite_or_test: A suite or test to show to the user. This should |
676 | - obey the 'TestCase' protocol - it should have a method run(result) |
677 | - that causes all the tests contained in the object to be handed to |
678 | - the result object. |
679 | - """ |
680 | - raise NotImplementedError(self.output_results) |
681 | - |
682 | def output_stream(self, stream): |
683 | """Show a byte stream to the user. |
684 | |
685 | @@ -163,3 +162,41 @@ |
686 | """ |
687 | # This might not be the right place. |
688 | raise NotImplementedError(self.subprocess_Popen) |
689 | + |
690 | + |
691 | +class BaseUITestResult(TestResult): |
692 | + """An abstract test result used with the UI. |
693 | + |
694 | + AbstractUI.make_result probably wants to return an object like this. |
695 | + """ |
696 | + |
697 | + def __init__(self, ui, get_id): |
698 | + """Construct an `AbstractUITestResult`. |
699 | + |
700 | + :param ui: The UI this result is associated with. |
701 | + :param get_id: A nullary callable that returns the id of the test run. |
702 | + """ |
703 | + super(BaseUITestResult, self).__init__() |
704 | + self.ui = ui |
705 | + self.get_id = get_id |
706 | + |
707 | + def _output_run(self, run_id): |
708 | + """Output a test run. |
709 | + |
710 | + :param run_id: The run id. |
711 | + """ |
712 | + if self.ui.options.quiet: |
713 | + return |
714 | + values = [('id', run_id), ('tests', self.testsRun)] |
715 | + failures = len(self.failures) + len(self.errors) |
716 | + if failures: |
717 | + values.append(('failures', failures)) |
718 | + skips = sum(map(len, self.skip_reasons.itervalues())) |
719 | + if skips: |
720 | + values.append(('skips', skips)) |
721 | + self.ui.output_values(values) |
722 | + |
723 | + def stopTestRun(self): |
724 | + super(BaseUITestResult, self).stopTestRun() |
725 | + run_id = self.get_id() |
726 | + self._output_run(run_id) |
727 | |
728 | === modified file 'testrepository/ui/cli.py' |
729 | --- testrepository/ui/cli.py 2010-09-11 19:56:11 +0000 |
730 | +++ testrepository/ui/cli.py 2010-09-20 17:47:50 +0000 |
731 | @@ -18,31 +18,34 @@ |
732 | import os |
733 | import sys |
734 | |
735 | -import testtools |
736 | - |
737 | from testrepository import ui |
738 | |
739 | -class CLITestResult(testtools.TestResult): |
740 | + |
741 | +class CLITestResult(ui.BaseUITestResult): |
742 | """A TestResult for the CLI.""" |
743 | |
744 | - def __init__(self, stream): |
745 | + def __init__(self, ui, get_id, stream): |
746 | """Construct a CLITestResult writing to stream.""" |
747 | - super(CLITestResult, self).__init__() |
748 | + super(CLITestResult, self).__init__(ui, get_id) |
749 | self.stream = stream |
750 | self.sep1 = '=' * 70 + '\n' |
751 | self.sep2 = '-' * 70 + '\n' |
752 | |
753 | - def _show_list(self, label, error_list): |
754 | - for test, output in error_list: |
755 | - self.stream.write(self.sep1) |
756 | - self.stream.write("%s: %s\n" % (label, test.id())) |
757 | - self.stream.write(self.sep2) |
758 | - self.stream.write(output) |
759 | - |
760 | - def stopTestRun(self): |
761 | - self._show_list('ERROR', self.errors) |
762 | - self._show_list('FAIL', self.failures) |
763 | - super(CLITestResult, self).stopTestRun() |
764 | + def _format_error(self, label, test, error_text): |
765 | + return ''.join([ |
766 | + self.sep1, |
767 | + '%s: %s\n' % (label, test.id()), |
768 | + self.sep2, |
769 | + error_text, |
770 | + ]) |
771 | + |
772 | + def addError(self, test, err=None, details=None): |
773 | + super(CLITestResult, self).addError(test, err=err, details=details) |
774 | + self.stream.write(self._format_error('ERROR', *(self.errors[-1]))) |
775 | + |
776 | + def addFailure(self, test, err=None, details=None): |
777 | + super(CLITestResult, self).addFailure(test, err=err, details=details) |
778 | + self.stream.write(self._format_error('FAIL', *(self.failures[-1]))) |
779 | |
780 | |
781 | class UI(ui.AbstractUI): |
782 | @@ -64,6 +67,9 @@ |
783 | def _iter_streams(self, stream_type): |
784 | yield self._stdin |
785 | |
786 | + def make_result(self, get_id): |
787 | + return CLITestResult(self, get_id, self._stdout) |
788 | + |
789 | def output_error(self, error_tuple): |
790 | self._stderr.write(str(error_tuple[1]) + '\n') |
791 | |
792 | @@ -72,14 +78,6 @@ |
793 | if not rest_string.endswith('\n'): |
794 | self._stdout.write('\n') |
795 | |
796 | - def output_results(self, suite_or_test): |
797 | - result = CLITestResult(self._stdout) |
798 | - result.startTestRun() |
799 | - try: |
800 | - suite_or_test.run(result) |
801 | - finally: |
802 | - result.stopTestRun() |
803 | - |
804 | def output_stream(self, stream): |
805 | contents = stream.read(65536) |
806 | while contents: |
807 | |
808 | === modified file 'testrepository/ui/model.py' |
809 | --- testrepository/ui/model.py 2010-09-07 12:37:17 +0000 |
810 | +++ testrepository/ui/model.py 2010-09-20 17:47:50 +0000 |
811 | @@ -19,6 +19,7 @@ |
812 | |
813 | from testrepository import ui |
814 | |
815 | + |
816 | class ProcessModel(object): |
817 | """A subprocess.Popen test double.""" |
818 | |
819 | @@ -31,6 +32,47 @@ |
820 | return '', '' |
821 | |
822 | |
823 | +class TestSuiteModel(object): |
824 | + |
825 | + def __init__(self): |
826 | + self._results = [] |
827 | + |
828 | + def recordResult(self, method, *args): |
829 | + self._results.append((method, args)) |
830 | + |
831 | + def run(self, result): |
832 | + for method, args in self._results: |
833 | + getattr(result, method)(*args) |
834 | + |
835 | + |
836 | +class TestResultModel(ui.BaseUITestResult): |
837 | + |
838 | + def __init__(self, ui, get_id): |
839 | + super(TestResultModel, self).__init__(ui, get_id) |
840 | + self._suite = TestSuiteModel() |
841 | + |
842 | + def startTest(self, test): |
843 | + super(TestResultModel, self).startTest(test) |
844 | + self._suite.recordResult('startTest', test) |
845 | + |
846 | + def stopTest(self, test): |
847 | + self._suite.recordResult('stopTest', test) |
848 | + |
849 | + def addError(self, test, *args): |
850 | + super(TestResultModel, self).addError(test, *args) |
851 | + self._suite.recordResult('addError', test, *args) |
852 | + |
853 | + def addFailure(self, test, *args): |
854 | + super(TestResultModel, self).addFailure(test, *args) |
855 | + self._suite.recordResult('addFailure', test, *args) |
856 | + |
857 | + def stopTestRun(self): |
858 | + if self.ui.options.quiet: |
859 | + return |
860 | + self.ui.outputs.append(('results', self._suite)) |
861 | + return super(TestResultModel, self).stopTestRun() |
862 | + |
863 | + |
864 | class UI(ui.AbstractUI): |
865 | """A object based UI. |
866 | |
867 | @@ -87,15 +129,15 @@ |
868 | for stream_bytes in streams: |
869 | yield StringIO(stream_bytes) |
870 | |
871 | + def make_result(self, get_id): |
872 | + return TestResultModel(self, get_id) |
873 | + |
874 | def output_error(self, error_tuple): |
875 | self.outputs.append(('error', error_tuple)) |
876 | |
877 | def output_rest(self, rest_string): |
878 | self.outputs.append(('rest', rest_string)) |
879 | |
880 | - def output_results(self, suite_or_test): |
881 | - self.outputs.append(('results', suite_or_test)) |
882 | - |
883 | def output_stream(self, stream): |
884 | self.outputs.append(('stream', stream.read())) |