Merge lp:~jameinel/bzr/2.3-filter-tests into lp:bzr
Status: Merged
Approved by: Martin Pool
Approved revision: no longer in the source branch
Merged at revision: 5445
Proposed branch: lp:~jameinel/bzr/2.3-filter-tests
Merge into: lp:bzr
Diff against target: 477 lines (+254/-17), 17 files modified:
NEWS (+4/-0), bzrlib/lazy_import.py (+1/-1), bzrlib/shelf.py (+1/-1), bzrlib/strace.py (+1/-1), bzrlib/tests/__init__.py (+55/-1), bzrlib/tests/per_bzrdir/__init__.py (+1/-1), bzrlib/tests/per_bzrdir/test_bzrdir.py (+1/-1), bzrlib/tests/test_bugtracker.py (+1/-1), bzrlib/tests/test_lazy_import.py (+1/-1), bzrlib/tests/test_rio.py (+1/-1), bzrlib/tests/test_selftest.py (+181/-2), bzrlib/tests/test_setup.py (+1/-1), bzrlib/tests/test_status.py (+1/-1), bzrlib/tests/test_strace.py (+1/-1), bzrlib/tests/test_tuned_gzip.py (+1/-1), bzrlib/tests/testui.py (+1/-1), bzrlib/tuned_gzip.py (+1/-1)
To merge this branch: bzr merge lp:~jameinel/bzr/2.3-filter-tests
Related bugs: (none)
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Martin Packman (community) | Approve | | |
| Martin Pool | Approve | | |
Review via email: mp+33938@code.launchpad.net
Commit message
Remove 'log' information from "successful" tests
Description of the change
This changes TestCase so that it no longer includes the log information for what we consider to be non-failure cases. Specifically skip, xfail, and n/a.
This should shrink the pqm output considerably (and I'd be willing to backport it to 2.0 if we want it.)
It should also reduce some of the noise when running selftest -v. It doesn't change addSkip to report the 'reason' as a single line like we used to, though I would consider working on that if people would like it (basically, if the details dict only contains 'reason', just display it, rather than doing the full formatting dance).
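The single-line idea mentioned above can be sketched in plain Python. The function name and the formatting here are illustrative only, not bzrlib's actual code; a real details dict holds testtools Content objects rather than plain strings.

```python
# Sketch: if a test's details dict carries only a 'reason' entry,
# show it on one line instead of the full "Text attachment" dump.

def format_skip(details):
    """Return a one-line reason when possible, else a verbose dump."""
    if set(details) == {'reason'}:
        return details['reason']
    lines = []
    for name, text in sorted(details.items()):
        lines.append('Text attachment: %s' % name)
        lines.append('-' * 12)
        lines.append(text)
        lines.append('-' * 12)
    return '\n'.join(lines)

print(format_skip({'reason': 'Not a local URL'}))
# prints: Not a local URL
```

With more than one detail present it falls back to the verbose attachment format, matching the "before" output shown below.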
For a specific example, before:
...tp.BoundSFTP
Text attachment: log
------------
3.557 creating repository in file://
3.599 creating branch <bzrlib.
einel/appdata/
3.721 opening working tree 'C:/users/
3.831 opening working tree 'C:/users/
------------
Text attachment: reason
------------
Not a local URL
------------
after:
...tp.BoundSFTP
Text attachment: reason
------------
Not a local URL
------------
And, of course, it used to be:
...tp.BoundSFTP
Not a local URL
I went about it in this fashion because we add the log (lazily) from setUp. Unfortunately, testtools itself doesn't provide anything like 'discardDetail()'.
The other option would be to figure out how to only add the detail once add(Failure/Error) was called. However, experience from addSkip is that there seem to be race conditions where testtools will have evaluated getDetails before we get a chance to do anything in our code.
I also chose this route so that it works with all possible result classes (--subunit and regular), rather than only patching our TestResult class. (I think if we added 'details' to our addSkip() we could do our own filtering.)
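A minimal stand-in for the approach described above: it mimics testtools' details storage with a plain dict rather than subclassing the real TestCase, so the class here is illustrative, but the trick is the same one the branch uses — getDetails() returns the live dict, so deleting a key from it is enough.

```python
# Standalone sketch of the discardDetail idea: details live in a
# "private" attribute, but getDetails() hands back the live dict,
# so removing an entry from it is sufficient.

class FakeTestCase(object):
    def __init__(self):
        self.__details = {}

    def addDetail(self, name, content):
        self.__details[name] = content

    def getDetails(self):
        return self.__details          # the live dict, as in testtools

    def discardDetail(self, name):
        """Remove a detail; safe to call even if it is absent."""
        details = self.getDetails()
        if name in details:
            del details[name]

test = FakeTestCase()
test.addDetail('log', '3.557 creating repository ...')
test.addDetail('reason', 'Not a local URL')
test.discardDetail('log')
print(sorted(test.getDetails()))
# prints: ['reason']
```

Because the removal goes through getDetails(), it works no matter which result class eventually consumes the details.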
Robert Collins (lifeless) wrote:
John A Meinel (jameinel) wrote:
On 8/29/2010 3:34 AM, Robert Collins wrote:
> so for skip I think keeping the log is important, it may well have
> hints to why something gets skipped beyond the reason; for xfail this
> makes sense to me.
>
> Looking at the implementation, I'd rather see discardDetail in
> testtools upstream - no need to do it in bzr at all.
>
> But, I don't think its correct, you say that the log becomes part of
> the reason, but that doesn't make sense at all.
>
> Whats the size issue with PQM? We don't skip all that many tests...
>
> -Rob
We skip 2200 tests, and it causes the log file to become 7.2MB in size.
I would consider that both "many tests" and a "large output".
John
=:->
Martin Packman (gz) wrote:
> It also should reduce some of the noise when running selftest -v. It doesn't
> change addSkip to report the 'reason' as a single line like we used to,
> though I would consider working on that if people would like it. (basically,
> if the details dict only contains reason, just display it, rather that doing
> the full formatting dance.)
The added noise is a straight-up regression from the switch to the testtools API, which I finally got round to filing as bug 625597 a couple of days back. That wants fixing regardless of whether we want to selectively clear logs like this, and mostly means submitting to an ever more testtools-oriented style.
I'm not certain that clearing logs is the right thing. Generally I'd think it a bug if skipped tests have long log files. Some tests take a second to run, leak a thread, and just skip anyway.
Overall I think I'd prefer this happened in subunit. Passing tests also have logs, but I'm guessing subunit doesn't show their details at all.
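Filtering at the result-class boundary, as Martin suggests, is exactly what the SubUnitBzrProtocolClient change in the diff further down this page does for successes. The sketch below shows the shape of that override with a recording stand-in instead of subunit's real TestProtocolClient (the class names here are illustrative):

```python
# Sketch: an addSuccess override drops the 'log' detail before the
# base class serialises anything. The base class is a stand-in that
# records events instead of writing a subunit stream.

class RecordingClient(object):
    def __init__(self):
        self.events = []

    def addSuccess(self, test, details=None):
        self.events.append(('success', test, details))

class LogStrippingClient(RecordingClient):
    def addSuccess(self, test, details=None):
        # Drop the (potentially huge) log before it reaches the stream.
        if details is not None and 'log' in details:
            del details['log']
        return super(LogStrippingClient, self).addSuccess(test, details)

client = LogStrippingClient()
client.addSuccess('test_success', {'log': '47MB of noise', 'reason': 'ok'})
print(client.events[0][2])
# prints: {'reason': 'ok'}
```

Note the override mutates the caller's details dict in place; that is fine for a one-shot stream writer but worth keeping in mind if the dict is shared.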
John A Meinel (jameinel) wrote:
On 8/29/2010 3:34 AM, Robert Collins wrote:
> so for skip I think keeping the log is important, it may well have
> hints to why something gets skipped beyond the reason; for xfail this
> makes sense to me.
>
> Looking at the implementation, I'd rather see discardDetail in
> testtools upstream - no need to do it in bzr at all.
>
> But, I don't think its correct, you say that the log becomes part of
> the reason, but that doesn't make sense at all.
try:
except TypeError:
# have to convert
John
=:->
Robert Collins (lifeless) wrote:
So, 2200 tests is 10% of the total; the other 90% still gather all the logs.
That try:finally: demonstrates that bzr *just needs to update its API*:
it's going via the compatibility code path.
Vincent Ladeuil (vila) wrote:
As a data point, I still can't get any email from PQM on failures.
I just had to reproduce a problem, and the raw selftest.log (uncompressed, unfiltered) is 47M.
I'm not quite sure which stream is sent on failure, but I suspect it is this one.
So, for debugging purposes, I don't like removing stuff.
But I currently get *nothing*, so can we step back a bit and address this
problem once and for all?
Should we send a very short mail on failures with a URL to get the whole stream, for example?
John A Meinel (jameinel) wrote:
On 8/29/2010 2:58 PM, Robert Collins wrote:
> So, 2200 tests is 10% of the total, the other 90% still gather all the logs.
>
> That try:finally: demonstrates that bzr *just needs to update its API*
> - its going via the compatibility code path.
Except doesn't subunit go via that path as well...
But you bring up a good point. I didn't realize that success was also
generating log output, which I would also like to suppress.
It is still 7MB as an email, which means that we are failing to get
failure emails sometimes.
John
=:->
John A Meinel (jameinel) wrote:
On 8/30/2010 9:53 AM, Vincent Ladeuil wrote:
> As a data point, I still can't get any email from PQM on failures.
>
> I just had to reproduce a problem and the raw selftest.log containing the uncompressed,
> unfiltered, selftest.log is 47M.
>
> I'm not quite sure about which stream is sent on failure but I suspect it is this one.
>
> So, for debugging purposes, I don't like removing stuff.
> But I currently get *nothing* so can we step back a bit and addresses this
> problem once and for all ?
>
> Should we send a very short mail on failures with an url to get the whole stream for example ?
Do you really feel that including the log output for xfail, skip, and
success is adding useful info? *I* feel it is just adding a lot of noise
with almost 0 signal...
I guess we've reached the point of
a) Bringing it to the main list.
b) Intelligent people disagreeing, so we need a vote/proclamation to
resolve.
John
=:->
Vincent Ladeuil (vila) wrote:
>>>>> John Arbash Meinel <email address hidden> writes:
> On 8/30/2010 9:53 AM, Vincent Ladeuil wrote:
>> As a data point, I still can't get any email from PQM on failures.
>>
>> I just had to reproduce a problem and the raw selftest.log containing the uncompressed,
>> unfiltered, selftest.log is 47M.
>>
>> I'm not quite sure about which stream is sent on failure but I suspect it is this one.
>>
>> So, for debugging purposes, I don't like removing stuff.
>> But I currently get *nothing* so can we step back a bit and addresses this
>> problem once and for all ?
>>
>> Should we send a very short mail on failures with an url to get the whole stream for example ?
> Do you really feel that including the log output for xfail, skip, and
> success is adding useful info?
No. But you forgot 'not applicable' :-P
> *I* feel it is just adding a lot of noise with almost 0 signal...
Right, so ISTM that we agree about two things:
- we want the list of tests and their status (pass/fail/skip etc.)
- we care about log only for failures.
Don't we get that by simply *not* using subunit on pqm ?
Or is pqm.bazaar-cvs.org *requiring* subunit ?
Robert Collins (lifeless) wrote:
we did two simultaneous things a while back:
- turned off -1, so that we get *all* failures
- turned on subunit, so that the failures are machine driven
The emails are meant to be compressed, so you should in principle be
able to get them reliably.
If we're actually running into mailer limits, we can do a few things:
- turn off subunit (will reduce some of the overhead - lots in a
success case, but in a it-all-
hit the limit and whatever undiagnosed and unfixed issue is biting you
now, will bite you then.
- diagnose and fix the mailer path issue so that you receive the mails reliably
- stop using email to send the results
- filter the stream at source so success and other results that you
consider uninteresting are not included in the 'raw' stream. Note that
again, this will *leave the failure mode in place so when it goes bad
you will stop getting the responses*
Now, I'm not directly experiencing this anymore, so you need to do
what makes sense to you.
If subunit's API is out of date and messing up the reason formatting,
I'll happily fix it - I'm going to go peek and see if its addSkip is
stale right after mailing this.
To stop using email to send the results, we either need to design a
new thing and change PQM to do it, or reenable the LP API support,
which Tim requested we disable.
We previously *haven't* filtered at source as a defensive measure
against bugs. Python 2 is riddled with default-encoding pitfalls and
they made subunit + bzr very unreliable at one point. Possibly totally
fixed thanks to Martin[gz].
Personally, I would not take any action that leaves the problem fallow
waiting for a serious test failure to provoke it: that just means that
when you need it most it will fail. However, as I say above: you're
using this system, I'm rarely using it : do what makes sense to you. I
do strongly suggest that you change things in PQM itself if you want
filtering, rather than bzr. That will at least preserve the detailed
output for babune if you do need it in future.
As for the value of logs on success, xfail, skip etc : *if* they pass
incorrectly, I think the log will be invaluable, but you won't need it
till you need it.
-Rob
Vincent Ladeuil (vila) wrote:
>>>>> Robert Collins <email address hidden> writes:
> we did two simultaneous things a while back:
> - turned off -1, so that we get *all* failures
+1
> - turned on subunit, so that the failures are machine driven
ECANTPARSE
> The emails are meant to be compressed, so you should in principle
> be able get it reliably.
In practice I only get failures for merge errors, any error in the test
suite and... it's not delivered.
> If we're actually running into mailer limits, we can do a few
> things:
> - turn off subunit (will reduce some of the overhead - lots in a
> success case, but in a it-all-
> still hit the limit and whatever undiagnosed and unfixed issue is
> biting you now, will bite you then.
Yup, exactly my feeling.
> - diagnose and fix the mailer path issue so that you receive the
> mails reliably
As a data point, John got an email for a failure and *I* didn't get it
for the same failure (or so very closely the same failure that the
difference doesn't matter).
This means the problem is on my side but I don't know where exactly. I
use fetchmail and there is *nothing* in the logs so that also rules out
the delivery part. That leaves my ISP mail server... and I don't control
it. That's me, but it could apply to anybody else, we just don't know.
> - stop using email to send the results
Or store the results somewhere (deleting them after a day/week/month
whatever).
Or send two mails:
- one indicating the failure
- one with the stream attached.
At least with that I will *know* there was a failure.
> - filter the stream at source so success and other results that
> you consider uninteresting are not included in the 'raw'
> stream. Note that again, this will *leave the failure mode in
> place so when it goes bad you will stop getting the responses*
Another idea would be to have a different pqm instance used only for
known failures, and make the output for the regular pqm include only
*failures*, getting rid of *all* the rest.
> Now, I'm not directly experiencing this anymore, so you need to do
> what makes sense to you.
Thanks for your thoughts anyway.
> If subunit's API is out of date and messing up the reason
> formatting, I'll happily fix it - I'm going to go peek and see if
> its addSkip is stale right after mailing this.
Cool, any feedback ?
> To stop using email to send the results, we either need to design
> a new thing and change PQM to do it, or reenable the LP API
> support, which Tim requested we disable.
And what would that give us ? Failed run stream attached to the mp as an
attachment ?
> We previously *haven't* filtered at source as a defensive measure
> against bugs. Python 2 is riddled with default-encoding pitfalls
> and they made subunit + bzr very unreliable at one point. Possibly
> totally fixed thanks to Martin[gz].
I still expect a few (tiny but annoying) bugs to be fixed there...
> Personally, I would not take any action that leaves the problem
> fallow waiting for a serious test failure to provoke it: that just
> means tha...
John A Meinel (jameinel) wrote:
...
> > - turn off subunit (will reduce some of the overhead - lots in a
> > success case, but in a it-all-
> > still hit the limit and whatever undiagnosed and unfixed issue is
> > biting you now, will bite you then.
>
> Yup, exactly my feeling.
>
> > - diagnose and fix the mailer path issue so that you receive the
> > mails reliably
>
> As a data point, John got an email for a failure and *I* didn't get it
> for the same failure (or so very closely the same failure that the
> difference doesn't matter).
Right. For that 'failure' there was a single test failing. And the size
of the compressed log is 3.2MB, which makes it about 7MB according to
Thunderbird. (Figure base64 encoding, and ?).
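As a rough sanity check of the base64 overhead John mentions: MIME base64 expands data by 4/3, plus a newline every 76 output characters, i.e. roughly 1.35x. The jump from 3.2MB compressed to ~7MB as an email must therefore also involve something else (headers, duplicated parts); that part is a guess. The payload size below is arbitrary.

```python
# Measure the expansion factor of line-wrapped base64 encoding.
import base64

payload = b'\x00' * 300000           # stand-in for a compressed log
encoded = base64.encodebytes(payload)  # wraps output at 76 chars/line
ratio = len(encoded) / len(payload)
print('expansion factor: %.3f' % ratio)
```

So base64 alone accounts for about a third of extra size, not a doubling.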
It is true that if you have a huge 'everything fails' case, you'll still
get into that range. Though I did have one of those recently that broke
the 10MB barrier and caused the mail to fail to reach me. Which was,
indeed, much harder for me to track down and fix.
However, getting rid of 7MB (of compressed+base64) success content would
get a lot more headroom for when a few tests fail (or even a few hundred
fail).
...
> > To stop using email to send the results, we either need to design
> > a new thing and change PQM to do it, or reenable the LP API
> > support, which Tim requested we disable.
>
> And what would that give us ? Failed run stream attached to the mp as an
> attachment ?
I don't think you can attach stuff to an MP. I think it would actually
try to set it as a message...
John
=:->
Martin Pool (mbp) wrote:
This is now a couple of weeks old and I'd like to wrap it up.
I think it would be reasonable to merge it, and at least see what we think about it. I cannot recall looking at the logs attached to an xfail, skip or not-applicable test for a long time. The normal and common thing is that they'll have a fairly self-explanatory message about being either known broken, or skipped because of a missing dependency, or whatever. In the source, the code that raises that exception is normally pretty obvious. If the test is being wrongly skipped, then the logs probably won't help and you need to dig in with pdb or something.
There is some unhelpful data logged by bzr and we could push down on that too.
On the whole I'm ok to merge this; it seems like it's just a couple of lines to turn it back on if we need to.
Martin Packman (gz) wrote:
If this is going to land, there are some things that should really be fixed first.
+ self._suppress_log ()
No space.
+ def _report_skip(self, result, e):
Rather than adding this, put the call in _do_skip instead?
ExtendedTestRes
different way:
test._log_contents = ''
Remove that and add your new version instead?
Would really like some kind of test.
Martin Packman (gz) wrote:
On 22/09/2010, John Arbash Meinel <email address hidden> wrote:
>>
>> + def _report_skip(self, result, e):
>>
>> Rather than adding this, put the call in _do_skip instead?
>
> IIRC, there was a reason to put it in _report_skip. I don't remember
> exactly, but I think it was that _report_skip gets called for all code
> paths, but _do_skip doesn't. (NotAvailable, vs Skip, vs XFAIL?)
I think this is true for fallbacks, but we shouldn't be using any? If
it does need doing this way, explaining why in the docstring would be
good.
>> ExtendedTestRes
>> different way:
>>
>> test._log_contents = ''
>>
>> Remove that and add your new version instead?
>
> Given that that one doesn't actually seem to work, I guess that would be
> reasonable.
Removing the log from successes would be the biggest win as you've noted.
>> Would really like some kind of test.
>
> I think I can work something out in test_selftest, about what details
> are present in the final result. I'll give it a poke and see what I get.
>
> I'd certainly also like to suppress the log details for success cases.
>
> John
> =:->
John A Meinel (jameinel) wrote:
Please re-review. I've:
a) Added tests about details, etc, when running locally
b) Added tests for what is contained in the subunit stream
c) Hopefully updated the docs sufficiently for Martin [gz].
Martin Packman (gz) wrote:
Let's land this. The tests are worth having even when the bzrlib ExtendedTestResult class is fixed to print less junk on the console or subunit starts being pickier about what it does with success results.
A couple of notes. Thanks for the reasoning for _report_skip; it makes it clear the method can probably be removed when the bzrlib/testtools boundary starts treating exceptions properly. The new tests that use testtools.
John A Meinel (jameinel) wrote:
I did a quick test run with an artificially failing test, and I saw that it did help, a lot: 7.3MB down to 1.3MB for the email. (Email size, not the size of the .gz, nor the uncompressed size.)
John A Meinel (jameinel) wrote:
sent to pqm by email
Preview Diff
=== modified file 'NEWS'
--- NEWS 2010-09-24 12:53:00 +0000
+++ NEWS 2010-09-26 14:05:09 +0000
@@ -42,6 +42,10 @@
Internals
*********

+* When running ``bzr selftest --subunit`` the subunit stream will no
+ longer include the "log" information for tests which are considered to
+ be 'successes' (success, xfail, skip, etc) (John Arbash Meinel)
+
Testing
*******


=== modified file 'bzrlib/lazy_import.py'
--- bzrlib/lazy_import.py 2010-09-05 17:15:46 +0000
+++ bzrlib/lazy_import.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006 Canonical Ltd
+# Copyright (C) 2006-2010 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/shelf.py'
--- bzrlib/shelf.py 2010-09-24 15:11:57 +0000
+++ bzrlib/shelf.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2008 Canonical Ltd
+# Copyright (C) 2008, 2009, 2010 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/strace.py'
--- bzrlib/strace.py 2010-09-01 19:38:55 +0000
+++ bzrlib/strace.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2007 Canonical Ltd
+# Copyright (C) 2007, 2009, 2010 Canonical Ltd
# Authors: Robert Collins <robert.collins@canonical.com>
#
# This program is free software; you can redistribute it and/or modify

=== modified file 'bzrlib/tests/__init__.py'
--- bzrlib/tests/__init__.py 2010-09-17 14:27:27 +0000
+++ bzrlib/tests/__init__.py 2010-09-26 14:05:09 +0000
@@ -846,6 +846,22 @@
import pdb
pdb.Pdb().set_trace(sys._getframe().f_back)

+ def discardDetail(self, name):
+ """Extend the addDetail, getDetails api so we can remove a detail.
+
+ eg. bzr always adds the 'log' detail at startup, but we don't want to
+ include it for skipped, xfail, etc tests.
+
+ It is safe to call this for a detail that doesn't exist, in case this
+ gets called multiple times.
+ """
+ # We cheat. details is stored in __details which means we shouldn't
+ # touch it. but getDetails() returns the dict directly, so we can
+ # mutate it.
+ details = self.getDetails()
+ if name in details:
+ del details[name]
+
def _clear_debug_flags(self):
"""Prevent externally set debug flags affecting tests.

@@ -1569,7 +1585,12 @@
"""This test has failed for some known reason."""
raise KnownFailure(reason)

+ def _suppress_log(self):
+ """Remove the log info from details."""
+ self.discardDetail('log')
+
def _do_skip(self, result, reason):
+ self._suppress_log()
addSkip = getattr(result, 'addSkip', None)
if not callable(addSkip):
result.addSuccess(result)
@@ -1578,6 +1599,7 @@

@staticmethod
def _do_known_failure(self, result, e):
+ self._suppress_log()
err = sys.exc_info()
addExpectedFailure = getattr(result, 'addExpectedFailure', None)
if addExpectedFailure is not None:
@@ -1591,6 +1613,7 @@
reason = 'No reason given'
else:
reason = e.args[0]
+ self._suppress_log ()
addNotApplicable = getattr(result, 'addNotApplicable', None)
if addNotApplicable is not None:
result.addNotApplicable(self, reason)
@@ -1598,8 +1621,29 @@
self._do_skip(result, reason)

@staticmethod
+ def _report_skip(self, result, err):
+ """Override the default _report_skip.
+
+ We want to strip the 'log' detail. If we waint until _do_skip, it has
+ already been formatted into the 'reason' string, and we can't pull it
+ out again.
+ """
+ self._suppress_log()
+ super(TestCase, self)._report_skip(self, result, err)
+
+ @staticmethod
+ def _report_expected_failure(self, result, err):
+ """Strip the log.
+
+ See _report_skip for motivation.
+ """
+ self._suppress_log()
+ super(TestCase, self)._report_expected_failure(self, result, err)
+
+ @staticmethod
def _do_unsupported_or_skip(self, result, e):
reason = e.args[0]
+ self._suppress_log()
addNotSupported = getattr(result, 'addNotSupported', None)
if addNotSupported is not None:
result.addNotSupported(self, reason)
@@ -4459,10 +4503,20 @@
try:
from subunit import TestProtocolClient
from subunit.test_results import AutoTimingTestResultDecorator
+ class SubUnitBzrProtocolClient(TestProtocolClient):
+
+ def addSuccess(self, test, details=None):
+ # The subunit client always includes the details in the subunit
+ # stream, but we don't want to include it in ours.
+ if details is not None and 'log' in details:
+ del details['log']
+ return super(SubUnitBzrProtocolClient, self).addSuccess(
+ test, details)
+
class SubUnitBzrRunner(TextTestRunner):
def run(self, test):
result = AutoTimingTestResultDecorator(
- TestProtocolClient(self.stream))
+ SubUnitBzrProtocolClient(self.stream))
test.run(result)
return result
except ImportError:

=== modified file 'bzrlib/tests/per_bzrdir/__init__.py'
--- bzrlib/tests/per_bzrdir/__init__.py 2010-08-29 14:37:51 +0000
+++ bzrlib/tests/per_bzrdir/__init__.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006-2010 Canonical Ltd
+# Copyright (C) 2010 Canonical Ltd
# Authors: Robert Collins <robert.collins@canonical.com>
# Jelmer Vernooij <jelmer.vernooij@canonical.com>
# -*- coding: utf-8 -*-

=== modified file 'bzrlib/tests/per_bzrdir/test_bzrdir.py'
--- bzrlib/tests/per_bzrdir/test_bzrdir.py 2010-08-29 15:03:22 +0000
+++ bzrlib/tests/per_bzrdir/test_bzrdir.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006-2010 Canonical Ltd
+# Copyright (C) 2010 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_bugtracker.py'
--- bzrlib/tests/test_bugtracker.py 2010-09-06 19:12:52 +0000
+++ bzrlib/tests/test_bugtracker.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2007 Canonical Ltd
+# Copyright (C) 2007-2010 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_lazy_import.py'
--- bzrlib/tests/test_lazy_import.py 2010-09-05 17:15:46 +0000
+++ bzrlib/tests/test_lazy_import.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006 Canonical Ltd
+# Copyright (C) 2006-2010 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_rio.py'
--- bzrlib/tests/test_rio.py 2010-09-06 06:13:52 +0000
+++ bzrlib/tests/test_rio.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006 Canonical Ltd
+# Copyright (C) 2005, 2006, 2007, 2009, 2010 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_selftest.py'
--- bzrlib/tests/test_selftest.py 2010-09-17 14:27:27 +0000
+++ bzrlib/tests/test_selftest.py 2010-09-26 14:05:09 +0000
@@ -68,7 +68,7 @@
test_sftp_transport,
TestUtil,
)
-from bzrlib.trace import note
+from bzrlib.trace import note, mutter
from bzrlib.transport import memory
from bzrlib.version import _get_bzr_source_tree

@@ -1690,6 +1690,108 @@
self.assertEqual('original', obj.test_attr)


+class _MissingFeature(tests.Feature):
+ def _probe(self):
+ return False
+missing_feature = _MissingFeature()
+
+
+def _get_test(name):
+ """Get an instance of a specific example test.
+
+ We protect this in a function so that they don't auto-run in the test
+ suite.
+ """
+
+ class ExampleTests(tests.TestCase):
+
+ def test_fail(self):
+ mutter('this was a failing test')
+ self.fail('this test will fail')
+
+ def test_error(self):
+ mutter('this test errored')
+ raise RuntimeError('gotcha')
+
+ def test_missing_feature(self):
+ mutter('missing the feature')
+ self.requireFeature(missing_feature)
+
+ def test_skip(self):
+ mutter('this test will be skipped')
+ raise tests.TestSkipped('reason')
+
+ def test_success(self):
+ mutter('this test succeeds')
+
+ def test_xfail(self):
+ mutter('test with expected failure')
+ self.knownFailure('this_fails')
+
+ def test_unexpected_success(self):
+ mutter('test with unexpected success')
+ self.expectFailure('should_fail', lambda: None)
+
+ return ExampleTests(name)
+
+
+class TestTestCaseLogDetails(tests.TestCase):
+
+ def _run_test(self, test_name):
+ test = _get_test(test_name)
+ result = testtools.TestResult()
+ test.run(result)
+ return result
+
+ def test_fail_has_log(self):
+ result = self._run_test('test_fail')
+ self.assertEqual(1, len(result.failures))
+ result_content = result.failures[0][1]
+ self.assertContainsRe(result_content, 'Text attachment: log')
+ self.assertContainsRe(result_content, 'this was a failing test')
+
+ def test_error_has_log(self):
+ result = self._run_test('test_error')
+ self.assertEqual(1, len(result.errors))
+ result_content = result.errors[0][1]
+ self.assertContainsRe(result_content, 'Text attachment: log')
+ self.assertContainsRe(result_content, 'this test errored')
+
+ def test_skip_has_no_log(self):
+ result = self._run_test('test_skip')
+ self.assertEqual(['reason'], result.skip_reasons.keys())
+ skips = result.skip_reasons['reason']
+ self.assertEqual(1, len(skips))
+ test = skips[0]
+ self.assertFalse('log' in test.getDetails())
+
+ def test_missing_feature_has_no_log(self):
+ # testtools doesn't know about addNotSupported, so it just gets
+ # considered as a skip
+ result = self._run_test('test_missing_feature')
+ self.assertEqual([missing_feature], result.skip_reasons.keys())
+ skips = result.skip_reasons[missing_feature]
+ self.assertEqual(1, len(skips))
+ test = skips[0]
+ self.assertFalse('log' in test.getDetails())
+
+ def test_xfail_has_no_log(self):
+ result = self._run_test('test_xfail')
+ self.assertEqual(1, len(result.expectedFailures))
+ result_content = result.expectedFailures[0][1]
+ self.assertNotContainsRe(result_content, 'Text attachment: log')
+ self.assertNotContainsRe(result_content, 'test with expected failure')
+
+ def test_unexpected_success_has_log(self):
+ result = self._run_test('test_unexpected_success')
+ self.assertEqual(1, len(result.unexpectedSuccesses))
+ # Inconsistency, unexpectedSuccesses is a list of tests,
+ # expectedFailures is a list of reasons?
+ test = result.unexpectedSuccesses[0]
+ details = test.getDetails()
+ self.assertTrue('log' in details)
+
+
class TestTestCloning(tests.TestCase):
"""Tests that test cloning of TestCases (as used by multiply_tests)."""

@@ -1883,7 +1985,7 @@
tree.branch.repository.bzrdir.root_transport)


-class SelfTestHelper:
+class SelfTestHelper(object):

def run_selftest(self, **kwargs):
"""Run selftest returning its output."""
@@ -2040,6 +2142,83 @@
load_list='missing file name', list_only=True)


+class TestSubunitLogDetails(tests.TestCase, SelfTestHelper):
+
+ _test_needs_features = [features.subunit]
+
+ def run_subunit_stream(self, test_name):
+ from subunit import ProtocolTestCase
+ def factory():
+ return TestUtil.TestSuite([_get_test(test_name)])
+ stream = self.run_selftest(runner_class=tests.SubUnitBzrRunner,
+ test_suite_factory=factory)
+ test = ProtocolTestCase(stream)
+ result = testtools.TestResult()
+ test.run(result)
+ content = stream.getvalue()
+ return content, result
+
+ def test_fail_has_log(self):
355 | + content, result = self.run_subunit_stream('test_fail') |
356 | + self.assertEqual(1, len(result.failures)) |
357 | + self.assertContainsRe(content, '(?m)^log$') |
358 | + self.assertContainsRe(content, 'this test will fail') |
359 | + |
360 | + def test_error_has_log(self): |
361 | + content, result = self.run_subunit_stream('test_error') |
362 | + self.assertContainsRe(content, '(?m)^log$') |
363 | + self.assertContainsRe(content, 'this test errored') |
364 | + |
365 | + def test_skip_has_no_log(self): |
366 | + content, result = self.run_subunit_stream('test_skip') |
367 | + self.assertNotContainsRe(content, '(?m)^log$') |
368 | + self.assertNotContainsRe(content, 'this test will be skipped') |
369 | + self.assertEqual(['reason'], result.skip_reasons.keys()) |
370 | + skips = result.skip_reasons['reason'] |
371 | + self.assertEqual(1, len(skips)) |
372 | + test = skips[0] |
373 | + # RemotedTestCase doesn't preserve the "details" |
374 | + ## self.assertFalse('log' in test.getDetails()) |
375 | + |
376 | + def test_missing_feature_has_no_log(self): |
377 | + content, result = self.run_subunit_stream('test_missing_feature') |
378 | + self.assertNotContainsRe(content, '(?m)^log$') |
379 | + self.assertNotContainsRe(content, 'missing the feature') |
380 | + self.assertEqual(['_MissingFeature\n'], result.skip_reasons.keys()) |
381 | + skips = result.skip_reasons['_MissingFeature\n'] |
382 | + self.assertEqual(1, len(skips)) |
383 | + test = skips[0] |
384 | + # RemotedTestCase doesn't preserve the "details" |
385 | + ## self.assertFalse('log' in test.getDetails()) |
386 | + |
387 | + def test_xfail_has_no_log(self): |
388 | + content, result = self.run_subunit_stream('test_xfail') |
389 | + self.assertNotContainsRe(content, '(?m)^log$') |
390 | + self.assertNotContainsRe(content, 'test with expected failure') |
391 | + self.assertEqual(1, len(result.expectedFailures)) |
392 | + result_content = result.expectedFailures[0][1] |
393 | + self.assertNotContainsRe(result_content, 'Text attachment: log') |
394 | + self.assertNotContainsRe(result_content, 'test with expected failure') |
395 | + |
396 | + def test_unexpected_success_has_log(self): |
397 | + content, result = self.run_subunit_stream('test_unexpected_success') |
398 | + self.assertContainsRe(content, '(?m)^log$') |
399 | + self.assertContainsRe(content, 'test with unexpected success') |
400 | + self.expectFailure('subunit treats "unexpectedSuccess"' |
401 | + ' as a plain success', |
402 | + self.assertEqual, 1, len(result.unexpectedSuccesses)) |
403 | + self.assertEqual(1, len(result.unexpectedSuccesses)) |
404 | + test = result.unexpectedSuccesses[0] |
405 | + # RemotedTestCase doesn't preserve the "details" |
406 | + ## self.assertTrue('log' in test.getDetails()) |
407 | + |
408 | + def test_success_has_no_log(self): |
409 | + content, result = self.run_subunit_stream('test_success') |
410 | + self.assertEqual(1, result.testsRun) |
411 | + self.assertNotContainsRe(content, '(?m)^log$') |
412 | + self.assertNotContainsRe(content, 'this test succeeds') |
413 | + |
414 | + |
415 | class TestRunBzr(tests.TestCase): |
416 | |
417 | out = '' |
418 | |
=== modified file 'bzrlib/tests/test_setup.py'
--- bzrlib/tests/test_setup.py 2010-09-02 00:49:06 +0000
+++ bzrlib/tests/test_setup.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006 Canonical Ltd
+# Copyright (C) 2005, 2006, 2008, 2009, 2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_status.py'
--- bzrlib/tests/test_status.py 2010-08-31 13:49:48 +0000
+++ bzrlib/tests/test_status.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005 Canonical Ltd
+# Copyright (C) 2006-2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_strace.py'
--- bzrlib/tests/test_strace.py 2010-09-01 19:38:55 +0000
+++ bzrlib/tests/test_strace.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2007, 2010 Canonical Ltd
+# Copyright (C) 2007-2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_tuned_gzip.py'
--- bzrlib/tests/test_tuned_gzip.py 2010-08-29 18:29:22 +0000
+++ bzrlib/tests/test_tuned_gzip.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006 Canonical Ltd
+# Copyright (C) 2006, 2009, 2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/testui.py'
--- bzrlib/tests/testui.py 2010-09-14 08:14:22 +0000
+++ bzrlib/tests/testui.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006-2010 Canonical Ltd
+# Copyright (C) 2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tuned_gzip.py'
--- bzrlib/tuned_gzip.py 2010-08-29 18:30:45 +0000
+++ bzrlib/tuned_gzip.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006 Canonical Ltd
+# Copyright (C) 2006-2010 Canonical Ltd
 # Written by Robert Collins <robert.collins@canonical.com>
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
So for skip I think keeping the log is important: it may well contain
hints about why something gets skipped, beyond the stated reason. For
xfail this makes sense to me.
Looking at the implementation, I'd rather see a discardDetail helper in
testtools upstream; then there'd be no need to do it in bzr at all.
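A minimal sketch of what such a helper might look like. (Hedged: `discardDetail` is only a proposed name from this review, not part of testtools at the time; the class below mimics testtools' `addDetail`/`getDetails` API without depending on testtools itself.)

```python
# Hypothetical sketch of the proposed ``discardDetail`` helper.
# ``DetailsCarrier`` stands in for a testtools-style TestCase that
# keeps a dict of named detail attachments (such as the captured 'log').

class DetailsCarrier(object):

    def __init__(self):
        self._details = {}

    def addDetail(self, name, content):
        """Attach a named detail (e.g. the captured 'log') to this test."""
        self._details[name] = content

    def getDetails(self):
        return self._details

    def discardDetail(self, name):
        """Drop a previously attached detail; unknown names are ignored."""
        self._details.pop(name, None)


case = DetailsCarrier()
case.addDetail('log', 'captured log text')
case.discardDetail('log')
case.discardDetail('never-added')  # silently ignored
print('log' in case.getDetails())  # prints False
```

With a helper like this, a test runner could drop the 'log' attachment at the point where it decides the outcome is benign, instead of filtering the details dict by hand.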
But I don't think it's correct: you say that the log becomes part of
the reason, but that doesn't make sense at all.
What's the size issue with PQM? We don't skip all that many tests...
-Rob
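The core idea of the branch can be sketched outside of bzrlib: keep the 'log' detail only for outcomes a developer actually needs to debug. (Hedged: all names below are made up for illustration; this is not bzrlib's actual API.)

```python
# Illustrative sketch (not bzrlib code): drop the 'log' detail for benign
# outcomes (skip, xfail, missing feature) and keep it for failures, errors
# and unexpected successes, which is what the new tests above verify.

OUTCOMES_NEEDING_LOG = frozenset(['failure', 'error', 'unexpectedSuccess'])

def filter_details(details, outcome):
    """Return a copy of ``details``, without 'log' unless the outcome needs it."""
    if outcome in OUTCOMES_NEEDING_LOG:
        return dict(details)
    return dict((k, v) for (k, v) in details.items() if k != 'log')

details = {'log': '3.557 creating repository ...', 'reason': 'no SFTP server'}
print(sorted(filter_details(details, 'skip')))     # prints ['reason']
print(sorted(filter_details(details, 'failure')))  # prints ['log', 'reason']
```

This is why the PQM output shrinks: skip/xfail/not-applicable results no longer carry a "Text attachment: log" block, while failing tests keep theirs.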