Merge lp:~jameinel/bzr/2.3-filter-tests into lp:bzr

Proposed by John A Meinel
Status: Merged
Approved by: Martin Pool
Approved revision: no longer in the source branch.
Merged at revision: 5445
Proposed branch: lp:~jameinel/bzr/2.3-filter-tests
Merge into: lp:bzr
Diff against target: 477 lines (+254/-17)
17 files modified
NEWS (+4/-0)
bzrlib/lazy_import.py (+1/-1)
bzrlib/shelf.py (+1/-1)
bzrlib/strace.py (+1/-1)
bzrlib/tests/__init__.py (+55/-1)
bzrlib/tests/per_bzrdir/__init__.py (+1/-1)
bzrlib/tests/per_bzrdir/test_bzrdir.py (+1/-1)
bzrlib/tests/test_bugtracker.py (+1/-1)
bzrlib/tests/test_lazy_import.py (+1/-1)
bzrlib/tests/test_rio.py (+1/-1)
bzrlib/tests/test_selftest.py (+181/-2)
bzrlib/tests/test_setup.py (+1/-1)
bzrlib/tests/test_status.py (+1/-1)
bzrlib/tests/test_strace.py (+1/-1)
bzrlib/tests/test_tuned_gzip.py (+1/-1)
bzrlib/tests/testui.py (+1/-1)
bzrlib/tuned_gzip.py (+1/-1)
To merge this branch: bzr merge lp:~jameinel/bzr/2.3-filter-tests
Reviewer Review Type Date Requested Status
Martin Packman (community) Approve
Martin Pool Approve
Review via email: mp+33938@code.launchpad.net

Commit message

Remove 'log' information from "successful" tests

Description of the change

This changes TestCase so that it no longer includes the log information for what we consider to be non-failure cases. Specifically skip, xfail, and n/a.

This should shrink the pqm output considerably (and I'd be willing to backport it to 2.0 if we want it.)

It also should reduce some of the noise when running selftest -v. It doesn't change addSkip to report the 'reason' as a single line like we used to, though I would consider working on that if people would like it. (Basically, if the details dict contains only 'reason', just display that text directly, rather than doing the full formatting dance.)
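To illustrate the single-line idea, a minimal sketch (plain strings stand in for testtools Content objects, and `format_skip_details` is a hypothetical helper, not bzrlib API):

```python
def format_skip_details(details):
    """Render a result's details dict compactly.

    If the only detail is the skip 'reason', show its text as a single
    line; otherwise fall back to the verbose attachment formatting.
    """
    if sorted(details) == ['reason']:
        return details['reason'].strip()
    chunks = []
    for name in sorted(details):
        chunks.append('Text attachment: %s' % name)
        chunks.append('------------')
        chunks.append(details[name])
        chunks.append('------------')
    return '\n'.join(chunks)

# A skip whose only detail is the reason collapses to one line:
print(format_skip_details({'reason': 'Not a local URL'}))  # → Not a local URL
```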

For a specific example, before:
...tp.BoundSFTPBranch.test_bind_child_ahead_preserves_child(RemoteBranchFormat-v2) SKIP 967ms
Text attachment: log
------------
3.557 creating repository in file:///C:/users/jameinel/appdata/local/temp/testbzr-wo1wlh.tmp/.bzr/.
3.599 creating branch <bzrlib.branch.BzrBranchFormat7 object at 0x02E82690> in file:///C:/users/jameinel/appdata/local/temp/testbzr-wo1wlh.tmp/
3.721 opening working tree 'C:/users/jameinel/appdata/local/temp/testbzr-wo1wlh.tmp'
3.831 opening working tree 'C:/users/jameinel/appdata/local/temp/testbzr-wo1wlh.tmp'
------------
Text attachment: reason
------------
Not a local URL
------------

after:
...tp.BoundSFTPBranch.test_bind_child_ahead_preserves_child(RemoteBranchFormat-v2) SKIP 200ms
Text attachment: reason
------------
Not a local URL
------------

And, of course, it used to be:
...tp.BoundSFTPBranch.test_bind_child_ahead_preserves_child(RemoteBranchFormat-v2) SKIP 200ms
Not a local URL

I went about it in this fashion because we add the log (lazily) from setUp. Unfortunately, testtools itself doesn't provide anything like 'discardDetail()'.

The other option would be to figure out how to only add the detail once add(Failure/Error) was called. However, experience from addSkip is that there seem to be race conditions where testtools will have evaluated getDetails before we get a chance to do anything in our code.

I also chose this route so that it works with all possible result classes (--subunit and regular), rather than only patching our TestResult class. (I think if we added 'details' to our addSkip() we could do our own filtering.)
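The "only patching our TestResult class" alternative would look roughly like this — a hedged sketch assuming testtools-style add* signatures; `LogFilteringResult` is a hypothetical name, not code from this branch:

```python
class LogFilteringResult(object):
    """Forward everything to a wrapped TestResult, but drop the
    (potentially huge) 'log' detail from non-failure outcomes."""

    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # Failures, errors, stream handling, etc. pass through untouched.
        return getattr(self._wrapped, name)

    @staticmethod
    def _strip_log(details):
        if details is not None and 'log' in details:
            details = dict(details)  # don't mutate the caller's dict
            del details['log']
        return details

    def addSuccess(self, test, details=None):
        self._wrapped.addSuccess(test, details=self._strip_log(details))

    def addSkip(self, test, reason=None, details=None):
        self._wrapped.addSkip(test, reason, details=self._strip_log(details))

    def addExpectedFailure(self, test, err=None, details=None):
        self._wrapped.addExpectedFailure(
            test, err, details=self._strip_log(details))
```

The drawback noted above still applies: every result class in every mode (--subunit included) would need wrapping, which is why stripping the detail on the TestCase side is more robust.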

Revision history for this message
Robert Collins (lifeless) wrote :

so for skip I think keeping the log is important, it may well have
hints to why something gets skipped beyond the reason; for xfail this
makes sense to me.

Looking at the implementation, I'd rather see discardDetail in
testtools upstream - no need to do it in bzr at all.

But I don't think it's correct: you say that the log becomes part of
the reason, but that doesn't make sense at all.

What's the size issue with PQM? We don't skip all that many tests...

-Rob

Revision history for this message
John A Meinel (jameinel) wrote :

On 8/29/2010 3:34 AM, Robert Collins wrote:
> so for skip I think keeping the log is important, it may well have
> hints to why something gets skipped beyond the reason; for xfail this
> makes sense to me.
>
> Looking at the implementation, I'd rather see discardDetail in
> testtools upstream - no need to do it in bzr at all.
>
> But, I don't think its correct, you say that the log becomes part of
> the reason, but that doesn't make sense at all.
>
> Whats the size issue with PQM? We don't skip all that many tests...
>
> -Rob

We skip 2200 tests, and it causes the log file to become 7.2MB in size.
I would consider that both "many tests" and a "large output".

John
=:->

Revision history for this message
Martin Packman (gz) wrote :

> It also should reduce some of the noise when running selftest -v. It doesn't
> change addSkip to report the 'reason' as a single line like we used to,
> though I would consider working on that if people would like it. (basically,
> if the details dict only contains reason, just display it, rather that doing
> the full formatting dance.)

The added noise is a straight-up regression from the switch to the testtools api, which I finally got round to filing as bug 625597 a couple of days back. That wants fixing regardless of whether we want to selectively clear logs like this, and mostly means submitting to an ever more testtools-oriented style.

I'm not certain that clearing logs is the right thing. Generally I'd think it a bug if skipped tests have long log files. Some tests take a second to run, leak a thread, and just skip anyway.

Overall I think I'd prefer this happened in subunit. Passing tests also have logs, but I'm guessing subunit doesn't show their details at all.

Revision history for this message
John A Meinel (jameinel) wrote :

On 8/29/2010 3:34 AM, Robert Collins wrote:
> so for skip I think keeping the log is important, it may well have
> hints to why something gets skipped beyond the reason; for xfail this
> makes sense to me.
>
> Looking at the implementation, I'd rather see discardDetail in
> testtools upstream - no need to do it in bzr at all.
>
> But, I don't think its correct, you say that the log becomes part of
> the reason, but that doesn't make sense at all.

            try:
                return addSkip(test, details=details)
            except TypeError:
                # have to convert
                reason = _details_to_str(details)

John
=:->

Revision history for this message
Robert Collins (lifeless) wrote :

So, 2200 tests is 10% of the total, the other 90% still gather all the logs.

That try/except demonstrates that bzr *just needs to update its API*
- it's going via the compatibility code path.
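The compatibility dance in question can be sketched like so (hedged: `_details_to_str` here is a simplified stand-in for testtools' real formatter, shown only to illustrate how the log ends up folded into the reason string):

```python
def _details_to_str(details):
    # Stand-in for testtools' formatter: flattens every attachment,
    # log included, into one big text blob.
    return '\n'.join('%s: %s' % (k, details[k]) for k in sorted(details))


def compat_add_skip(result, test, details):
    """Call addSkip with a details dict if the result supports it;
    otherwise flatten the details into a plain reason string."""
    try:
        return result.addSkip(test, details=details)
    except TypeError:
        # Old-style result: every detail, the 'log' included, gets folded
        # into the reason here - which is why discarding the log must
        # happen earlier, on the TestCase side.
        reason = _details_to_str(details)
        return result.addSkip(test, reason)
```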

Revision history for this message
Vincent Ladeuil (vila) wrote :

As a data point, I still can't get any email from PQM on failures.

I just had to reproduce a problem, and the raw selftest.log, uncompressed
and unfiltered, is 47M.

I'm not quite sure about which stream is sent on failure but I suspect it is this one.

So, for debugging purposes, I don't like removing stuff.
But I currently get *nothing*, so can we step back a bit and address this
problem once and for all?

Should we send a very short mail on failures with a URL to get the whole stream, for example?

Revision history for this message
John A Meinel (jameinel) wrote :

On 8/29/2010 2:58 PM, Robert Collins wrote:
> So, 2200 tests is 10% of the total, the other 90% still gather all the logs.
>
> That try:finally: demonstrates that bzr *just needs to update its API*
> - its going via the compatibility code path.

Except doesn't subunit go via that path as well...

But you bring up a good point. I didn't realize that success was also
generating log output, which I would also like to suppress.

It is still 7MB as an email, which means that we are failing to get
failure emails sometimes.

John
=:->

Revision history for this message
John A Meinel (jameinel) wrote :

On 8/30/2010 9:53 AM, Vincent Ladeuil wrote:
> As a data point, I still can't get any email from PQM on failures.
>
> I just had to reproduce a problem and the raw selftest.log containing the uncompressed,
> unfiltered, selftest.log is 47M.
>
> I'm not quite sure about which stream is sent on failure but I suspect it is this one.
>
> So, for debugging purposes, I don't like removing stuff.
> But I currently get *nothing* so can we step back a bit and addresses this
> problem once and for all ?
>
> Should we send a very short mail on failures with an url to get the whole stream for example ?

Do you really feel that including the log output for xfail, skip, and
success is adding useful info? *I* feel it is just adding a lot of noise
with almost 0 signal...

I guess we've reached the point of
a) Bringing it to the main list.
b) Intelligent people disagreeing, so we need a vote/proclamation to
resolve.

John
=:->

Revision history for this message
Vincent Ladeuil (vila) wrote :

>>>>> John Arbash Meinel <email address hidden> writes:

    > On 8/30/2010 9:53 AM, Vincent Ladeuil wrote:
    >> As a data point, I still can't get any email from PQM on failures.
    >>
    >> I just had to reproduce a problem and the raw selftest.log containing the uncompressed,
    >> unfiltered, selftest.log is 47M.
    >>
    >> I'm not quite sure about which stream is sent on failure but I suspect it is this one.
    >>
    >> So, for debugging purposes, I don't like removing stuff.
    >> But I currently get *nothing* so can we step back a bit and addresses this
    >> problem once and for all ?
    >>
    >> Should we send a very short mail on failures with an url to get the whole stream for example ?

    > Do you really feel that including the log output for xfail, skip, and
    > success is adding useful info?

No. But you forgot 'not applicable' :-P

    > *I* feel it is just adding a lot of noise with almost 0 signal...

Right, so ISTM that we agree about two things:

- we want the list of tests and their status (pass/fail/skip etc.)
- we care about the log only for failures.

Don't we get that by simply *not* using subunit on pqm?

Or is pqm.bazaar-cvs.org *requiring* subunit?

Revision history for this message
Robert Collins (lifeless) wrote :

we did two simultaneous things a while back:
 - turned off -1, so that we get *all* failures
 - turned on subunit, so that the failures are machine driven

The emails are meant to be compressed, so you should in principle be
able get it reliably.

If we're actually running into mailer limits, we can do a few things:
 - turn off subunit (this will reduce some of the overhead - lots in a
success case - but in an it-all-goes-pear-shaped failure you'll still
hit the limit, and whatever undiagnosed and unfixed issue is biting you
now will bite you then)
 - diagnose and fix the mailer path issue so that you receive the mails reliably
 - stop using email to send the results
 - filter the stream at source so success and other results that you
consider uninteresting are not included in the 'raw' stream. Note that
again, this will *leave the failure mode in place so when it goes bad
you will stop getting the responses*

Now, I'm not directly experiencing this anymore, so you need to do
what makes sense to you.

If subunit's API is out of date and messing up the reason formatting,
I'll happily fix it - I'm going to go peek and see if its addSkip is
stale right after mailing this.

To stop using email to send the results, we either need to design a
new thing and change PQM to do it, or reenable the LP API support,
which Tim requested we disable.

We previously *haven't* filtered at source as a defensive measure
against bugs. Python 2 is riddled with default-encoding pitfalls and
they made subunit + bzr very unreliable at one point. Possibly totally
fixed thanks to Martin[gz].

Personally, I would not take any action that leaves the problem fallow
waiting for a serious test failure to provoke it: that just means that
when you need it most it will fail. However, as I say above: you're
using this system, I'm rarely using it: do what makes sense to you. I
do strongly suggest that you change things in PQM itself if you want
filtering, rather than bzr. That will at least preserve the detailed
output for babune if you do need it in future.

As for the value of logs on success, xfail, skip etc : *if* they pass
incorrectly, I think the log will be invaluable, but you won't need it
till you need it.

-Rob

Revision history for this message
Vincent Ladeuil (vila) wrote :

>>>>> Robert Collins <email address hidden> writes:

    > we did two simultaneous things a while back:
    > - turned off -1, so that we get *all* failures

+1

    > - turned on subunit, so that the failures are machine driven

ECANTPARSE

    > The emails are meant to be compressed, so you should in principle
    > be able get it reliably.

In practice I only get failure mails for merge errors; any error in the test
suite and... it's not delivered.

    > If we're actually running into mailer limits, we can do a few
    > things:

    > - turn off subunit (will reduce some of the overhead - lots in a
    > success case, but in a it-all-goes-pear-shaped failure, you'll
    > still hit the limit and whatever undiagnosed and unfixed issue is
    > biting you now, will bite you then.

Yup, exactly my feeling.

    > - diagnose and fix the mailer path issue so that you receive the
    > mails reliably

As a data point, John got an email for a failure and *I* didn't get it
for the same failure (or so very closely the same failure that the
difference doesn't matter).

This means the problem is on my side but I don't know where exactly. I
use fetchmail and there is *nothing* in the logs so that also rules out
the delivery part. That leaves my ISP mail server... and I don't control
it. That's me, but it could apply to anybody else, we just don't know.

    > - stop using email to send the results

Or store the results somewhere (deleting them after a day/week/month
whatever).

Or send two mails:
- one indicating the failure
- one with the stream attached.

At least with that I will *know* there was a failure.

    > - filter the stream at source so success and other results that
    > you consider uninteresting are not included in the 'raw'
    > stream. Note that again, this will *leave the failure mode in
    > place so when it goes bad you will stop getting the responses*

Another idea would be to have a different pqm instance used only for
known failures, make the output for the regular pqm include only
*failures*, and get rid of *all* the rest.

    > Now, I'm not directly experiencing this anymore, so you need to do
    > what makes sense to you.

Thanks for your thoughts anyway.

    > If subunit's API is out of date and messing up the reason
    > formatting, I'll happily fix it - I'm going to go peek and see if
    > its addSkip is stale right after mailing this.

Cool, any feedback?

    > To stop using email to send the results, we either need to design
    > a new thing and change PQM to do it, or reenable the LP API
    > support, which Tim requested we disable.

And what would that give us? The failed run's stream attached to the mp as
an attachment?

    > We previously *haven't* filtered at source as a defensive measure
    > against bugs. Python 2 is riddled with default-encoding pitfalls
    > and they made subunit + bzr very unreliable at one point. Possibly
    > totally fixed thanks to Martin[gz].

I still expect a few (tiny but annoying) bugs to be fixed there...

    > Personally, I would not take any action that leaves the problem
    > fallow waiting for a serious test failure to provoke it: that just
    > means tha...


Revision history for this message
John A Meinel (jameinel) wrote :

...

> > - turn off subunit (will reduce some of the overhead - lots in a
> > success case, but in a it-all-goes-pear-shaped failure, you'll
> > still hit the limit and whatever undiagnosed and unfixed issue is
> > biting you now, will bite you then.
>
> Yup, exactly my feeling.
>
> > - diagnose and fix the mailer path issue so that you receive the
> > mails reliably
>
> As a data point, John got an email for a failure and *I* didn't get it
> for the same failure (or so very closely the same failure that the
> difference doesn't matter).

Right. For that 'failure' there was a single test failing. And the size
of the compressed log is 3.2MB, which makes it about 7MB according to
Thunderbird. (Figure base64 encoding, and ?).

It is true that if you have a huge 'everything fails' case, you'll still
get into that range. Though I did have one of those recently that broke
the 10MB barrier and caused the mail to fail to reach me. Which was,
indeed, much harder for me to track down and fix.

However, getting rid of 7MB (of compressed+base64) success content would
get a lot more headroom for when a few tests fail (or even a few hundred
fail).

...

> > To stop using email to send the results, we either need to design
> > a new thing and change PQM to do it, or reenable the LP API
> > support, which Tim requested we disable.
>
> And what would that give us ? Failed run stream attached to the mp as an
> attachment ?

I don't think you can attach stuff to an MP. I think it would actually
try to set it as a message...

John
=:->

Revision history for this message
Martin Pool (mbp) wrote :

This is now a couple of weeks old and I'd like to wrap it up.

I think it would be reasonable to merge it, and at least see what we think about it. I cannot recall looking at the logs attached to an xfail, skip or notapplicable test for a long time. The normal and common thing is that they'll have a fairly self-explanatory message about being either known broken, or skipped because of a missing dependency, or whatever. In the source, the code that raises that exception is normally pretty obvious. If the test is being wrongly skipped, then the logs probably won't help and you need to dig in with pdb or something.

There is some unhelpful data logged by bzr and we could push down on that too.

On the whole I'm ok to merge this; it seems like it's just a couple of lines to turn it back on if we need to.

review: Approve
Revision history for this message
Martin Packman (gz) wrote :

If this is going to land, there are some things that should really be fixed first.

+ self._suppress_log ()

No space.

+ def _report_skip(self, result, e):

Rather than adding this, put the call in _do_skip instead?

ExtendedTestResult.addSuccess appears to have a similar hack spelt in a different way:

        test._log_contents = ''

Remove that and add your new version instead?

Would really like some kind of test.

review: Needs Fixing
Revision history for this message
Martin Packman (gz) wrote :

On 22/09/2010, John Arbash Meinel <email address hidden> wrote:
>>
>> + def _report_skip(self, result, e):
>>
>> Rather than adding this, put the call in _do_skip instead?
>
> IIRC, there was a reason to put it in _report_skip. I don't remember
> exactly, but I think it was that _report_skip gets called for all code
> paths, but _do_skip doesn't. (NotAvailable, vs Skip, vs XFAIL?)

I think this is true for fallbacks, but we shouldn't be using any? If
it does need doing this way, explaining why in the docstring would be
good.

>> ExtendedTestResult.addSuccess appears to have a similar hack spelt in a
>> different way:
>>
>> test._log_contents = ''
>>
>> Remove that and add your new version instead?
>
> Given that that one doesn't actually seem to work, I guess that would be
> reasonable.

Removing the log from successes would be the biggest win as you've noted.

>> Would really like some kind of test.
>
> I think I can work something out in test_selftest, about what details
> are present in the final result. I'll give it a poke and see what I get.
>
> I'd certainly also like to suppress the log details for success cases.
>
> John
> =:->

Revision history for this message
John A Meinel (jameinel) wrote :

Please re-review. I've:

a) Added tests about details, etc, when running locally
b) Added tests for what is contained in the subunit stream
c) Hopefully updated the docs sufficiently for Martin [gz].

Revision history for this message
Martin Packman (gz) wrote :

Let's land this. The tests are worth having even when the bzrlib ExtendedTestResult class is fixed to print less junk on the console or subunit starts being pickier about what it does with success results.

A couple of notes. Thanks for the reasoning for _report_skip, makes it clear the method can probably be removed when the bzrlib/testtools boundary starts treating exceptions properly. The new tests that use testtools.TestResult rather than ExtendedTestResult check some things that would fail if the class switched because the behaviours are still quite different in a number of areas, but they should probably be converging anyway.

review: Approve
Revision history for this message
John A Meinel (jameinel) wrote :

I did a quick test run with an artificially failing test, and I saw that it did help, a lot: 7.3MB down to 1.3MB for the email. (Email size; not the size of the .gz, nor the uncompressed size.)

Revision history for this message
John A Meinel (jameinel) wrote :

sent to pqm by email

Preview Diff

=== modified file 'NEWS'
--- NEWS 2010-09-24 12:53:00 +0000
+++ NEWS 2010-09-26 14:05:09 +0000
@@ -42,6 +42,10 @@
 Internals
 *********
 
+* When running ``bzr selftest --subunit`` the subunit stream will no
+  longer include the "log" information for tests which are considered to
+  be 'successes' (success, xfail, skip, etc) (John Arbash Meinel)
+
 Testing
 *******
 
 
=== modified file 'bzrlib/lazy_import.py'
--- bzrlib/lazy_import.py 2010-09-05 17:15:46 +0000
+++ bzrlib/lazy_import.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006 Canonical Ltd
+# Copyright (C) 2006-2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/shelf.py'
--- bzrlib/shelf.py 2010-09-24 15:11:57 +0000
+++ bzrlib/shelf.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2008 Canonical Ltd
+# Copyright (C) 2008, 2009, 2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/strace.py'
--- bzrlib/strace.py 2010-09-01 19:38:55 +0000
+++ bzrlib/strace.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2007 Canonical Ltd
+# Copyright (C) 2007, 2009, 2010 Canonical Ltd
 # Authors: Robert Collins <robert.collins@canonical.com>
 #
 # This program is free software; you can redistribute it and/or modify
=== modified file 'bzrlib/tests/__init__.py'
--- bzrlib/tests/__init__.py 2010-09-17 14:27:27 +0000
+++ bzrlib/tests/__init__.py 2010-09-26 14:05:09 +0000
@@ -846,6 +846,22 @@
         import pdb
         pdb.Pdb().set_trace(sys._getframe().f_back)
 
+    def discardDetail(self, name):
+        """Extend the addDetail, getDetails api so we can remove a detail.
+
+        eg. bzr always adds the 'log' detail at startup, but we don't want to
+        include it for skipped, xfail, etc tests.
+
+        It is safe to call this for a detail that doesn't exist, in case this
+        gets called multiple times.
+        """
+        # We cheat. details is stored in __details which means we shouldn't
+        # touch it. but getDetails() returns the dict directly, so we can
+        # mutate it.
+        details = self.getDetails()
+        if name in details:
+            del details[name]
+
     def _clear_debug_flags(self):
         """Prevent externally set debug flags affecting tests.
 
@@ -1569,7 +1585,12 @@
         """This test has failed for some known reason."""
         raise KnownFailure(reason)
 
+    def _suppress_log(self):
+        """Remove the log info from details."""
+        self.discardDetail('log')
+
     def _do_skip(self, result, reason):
+        self._suppress_log()
         addSkip = getattr(result, 'addSkip', None)
         if not callable(addSkip):
             result.addSuccess(result)
@@ -1578,6 +1599,7 @@
 
     @staticmethod
     def _do_known_failure(self, result, e):
+        self._suppress_log()
         err = sys.exc_info()
         addExpectedFailure = getattr(result, 'addExpectedFailure', None)
         if addExpectedFailure is not None:
@@ -1591,6 +1613,7 @@
             reason = 'No reason given'
         else:
             reason = e.args[0]
+        self._suppress_log ()
         addNotApplicable = getattr(result, 'addNotApplicable', None)
         if addNotApplicable is not None:
             result.addNotApplicable(self, reason)
@@ -1598,8 +1621,29 @@
             self._do_skip(result, reason)
 
     @staticmethod
+    def _report_skip(self, result, err):
+        """Override the default _report_skip.
+
+        We want to strip the 'log' detail. If we waint until _do_skip, it has
+        already been formatted into the 'reason' string, and we can't pull it
+        out again.
+        """
+        self._suppress_log()
+        super(TestCase, self)._report_skip(self, result, err)
+
+    @staticmethod
+    def _report_expected_failure(self, result, err):
+        """Strip the log.
+
+        See _report_skip for motivation.
+        """
+        self._suppress_log()
+        super(TestCase, self)._report_expected_failure(self, result, err)
+
+    @staticmethod
     def _do_unsupported_or_skip(self, result, e):
         reason = e.args[0]
+        self._suppress_log()
         addNotSupported = getattr(result, 'addNotSupported', None)
         if addNotSupported is not None:
             result.addNotSupported(self, reason)
@@ -4459,10 +4503,20 @@
 try:
     from subunit import TestProtocolClient
     from subunit.test_results import AutoTimingTestResultDecorator
+    class SubUnitBzrProtocolClient(TestProtocolClient):
+
+        def addSuccess(self, test, details=None):
+            # The subunit client always includes the details in the subunit
+            # stream, but we don't want to include it in ours.
+            if details is not None and 'log' in details:
+                del details['log']
+            return super(SubUnitBzrProtocolClient, self).addSuccess(
+                test, details)
+
     class SubUnitBzrRunner(TextTestRunner):
         def run(self, test):
             result = AutoTimingTestResultDecorator(
-                TestProtocolClient(self.stream))
+                SubUnitBzrProtocolClient(self.stream))
             test.run(result)
             return result
 except ImportError:
 
=== modified file 'bzrlib/tests/per_bzrdir/__init__.py'
--- bzrlib/tests/per_bzrdir/__init__.py 2010-08-29 14:37:51 +0000
+++ bzrlib/tests/per_bzrdir/__init__.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
1# Copyright (C) 2006-2010 Canonical Ltd1# Copyright (C) 2010 Canonical Ltd
2# Authors: Robert Collins <robert.collins@canonical.com>2# Authors: Robert Collins <robert.collins@canonical.com>
3# Jelmer Vernooij <jelmer.vernooij@canonical.com>3# Jelmer Vernooij <jelmer.vernooij@canonical.com>
4# -*- coding: utf-8 -*-4# -*- coding: utf-8 -*-
55
=== modified file 'bzrlib/tests/per_bzrdir/test_bzrdir.py'
--- bzrlib/tests/per_bzrdir/test_bzrdir.py 2010-08-29 15:03:22 +0000
+++ bzrlib/tests/per_bzrdir/test_bzrdir.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
1# Copyright (C) 2006-2010 Canonical Ltd1# Copyright (C) 2010 Canonical Ltd
2#2#
3# This program is free software; you can redistribute it and/or modify3# This program is free software; you can redistribute it and/or modify
4# it under the terms of the GNU General Public License as published by4# it under the terms of the GNU General Public License as published by
55
=== modified file 'bzrlib/tests/test_bugtracker.py'
--- bzrlib/tests/test_bugtracker.py 2010-09-06 19:12:52 +0000
+++ bzrlib/tests/test_bugtracker.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
1# Copyright (C) 2007 Canonical Ltd1# Copyright (C) 2007-2010 Canonical Ltd
2#2#
3# This program is free software; you can redistribute it and/or modify3# This program is free software; you can redistribute it and/or modify
4# it under the terms of the GNU General Public License as published by4# it under the terms of the GNU General Public License as published by
55
=== modified file 'bzrlib/tests/test_lazy_import.py'
--- bzrlib/tests/test_lazy_import.py 2010-09-05 17:15:46 +0000
+++ bzrlib/tests/test_lazy_import.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006 Canonical Ltd
+# Copyright (C) 2006-2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_rio.py'
--- bzrlib/tests/test_rio.py 2010-09-06 06:13:52 +0000
+++ bzrlib/tests/test_rio.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006 Canonical Ltd
+# Copyright (C) 2005, 2006, 2007, 2009, 2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_selftest.py'
--- bzrlib/tests/test_selftest.py 2010-09-17 14:27:27 +0000
+++ bzrlib/tests/test_selftest.py 2010-09-26 14:05:09 +0000
@@ -68,7 +68,7 @@
     test_sftp_transport,
     TestUtil,
     )
-from bzrlib.trace import note
+from bzrlib.trace import note, mutter
 from bzrlib.transport import memory
 from bzrlib.version import _get_bzr_source_tree

@@ -1690,6 +1690,108 @@
         self.assertEqual('original', obj.test_attr)
 
 
+class _MissingFeature(tests.Feature):
+    def _probe(self):
+        return False
+missing_feature = _MissingFeature()
+
+
+def _get_test(name):
+    """Get an instance of a specific example test.
+
+    We protect this in a function so that they don't auto-run in the test
+    suite.
+    """
+
+    class ExampleTests(tests.TestCase):
+
+        def test_fail(self):
+            mutter('this was a failing test')
+            self.fail('this test will fail')
+
+        def test_error(self):
+            mutter('this test errored')
+            raise RuntimeError('gotcha')
+
+        def test_missing_feature(self):
+            mutter('missing the feature')
+            self.requireFeature(missing_feature)
+
+        def test_skip(self):
+            mutter('this test will be skipped')
+            raise tests.TestSkipped('reason')
+
+        def test_success(self):
+            mutter('this test succeeds')
+
+        def test_xfail(self):
+            mutter('test with expected failure')
+            self.knownFailure('this_fails')
+
+        def test_unexpected_success(self):
+            mutter('test with unexpected success')
+            self.expectFailure('should_fail', lambda: None)
+
+    return ExampleTests(name)
+
+
+class TestTestCaseLogDetails(tests.TestCase):
+
+    def _run_test(self, test_name):
+        test = _get_test(test_name)
+        result = testtools.TestResult()
+        test.run(result)
+        return result
+
+    def test_fail_has_log(self):
+        result = self._run_test('test_fail')
+        self.assertEqual(1, len(result.failures))
+        result_content = result.failures[0][1]
+        self.assertContainsRe(result_content, 'Text attachment: log')
+        self.assertContainsRe(result_content, 'this was a failing test')
+
+    def test_error_has_log(self):
+        result = self._run_test('test_error')
+        self.assertEqual(1, len(result.errors))
+        result_content = result.errors[0][1]
+        self.assertContainsRe(result_content, 'Text attachment: log')
+        self.assertContainsRe(result_content, 'this test errored')
+
+    def test_skip_has_no_log(self):
+        result = self._run_test('test_skip')
+        self.assertEqual(['reason'], result.skip_reasons.keys())
+        skips = result.skip_reasons['reason']
+        self.assertEqual(1, len(skips))
+        test = skips[0]
+        self.assertFalse('log' in test.getDetails())
+
+    def test_missing_feature_has_no_log(self):
+        # testtools doesn't know about addNotSupported, so it just gets
+        # considered as a skip
+        result = self._run_test('test_missing_feature')
+        self.assertEqual([missing_feature], result.skip_reasons.keys())
+        skips = result.skip_reasons[missing_feature]
+        self.assertEqual(1, len(skips))
+        test = skips[0]
+        self.assertFalse('log' in test.getDetails())
+
+    def test_xfail_has_no_log(self):
+        result = self._run_test('test_xfail')
+        self.assertEqual(1, len(result.expectedFailures))
+        result_content = result.expectedFailures[0][1]
+        self.assertNotContainsRe(result_content, 'Text attachment: log')
+        self.assertNotContainsRe(result_content, 'test with expected failure')
+
+    def test_unexpected_success_has_log(self):
+        result = self._run_test('test_unexpected_success')
+        self.assertEqual(1, len(result.unexpectedSuccesses))
+        # Inconsistency, unexpectedSuccesses is a list of tests,
+        # expectedFailures is a list of reasons?
+        test = result.unexpectedSuccesses[0]
+        details = test.getDetails()
+        self.assertTrue('log' in details)
+
+
 class TestTestCloning(tests.TestCase):
     """Tests that test cloning of TestCases (as used by multiply_tests)."""
 
@@ -1883,7 +1985,7 @@
             tree.branch.repository.bzrdir.root_transport)
 
 
-class SelfTestHelper:
+class SelfTestHelper(object):
 
     def run_selftest(self, **kwargs):
         """Run selftest returning its output."""
@@ -2040,6 +2142,83 @@
             load_list='missing file name', list_only=True)
 
 
+class TestSubunitLogDetails(tests.TestCase, SelfTestHelper):
+
+    _test_needs_features = [features.subunit]
+
+    def run_subunit_stream(self, test_name):
+        from subunit import ProtocolTestCase
+        def factory():
+            return TestUtil.TestSuite([_get_test(test_name)])
+        stream = self.run_selftest(runner_class=tests.SubUnitBzrRunner,
+            test_suite_factory=factory)
+        test = ProtocolTestCase(stream)
+        result = testtools.TestResult()
+        test.run(result)
+        content = stream.getvalue()
+        return content, result
+
+    def test_fail_has_log(self):
+        content, result = self.run_subunit_stream('test_fail')
+        self.assertEqual(1, len(result.failures))
+        self.assertContainsRe(content, '(?m)^log$')
+        self.assertContainsRe(content, 'this test will fail')
+
+    def test_error_has_log(self):
+        content, result = self.run_subunit_stream('test_error')
+        self.assertContainsRe(content, '(?m)^log$')
+        self.assertContainsRe(content, 'this test errored')
+
+    def test_skip_has_no_log(self):
+        content, result = self.run_subunit_stream('test_skip')
+        self.assertNotContainsRe(content, '(?m)^log$')
+        self.assertNotContainsRe(content, 'this test will be skipped')
+        self.assertEqual(['reason'], result.skip_reasons.keys())
+        skips = result.skip_reasons['reason']
+        self.assertEqual(1, len(skips))
+        test = skips[0]
+        # RemotedTestCase doesn't preserve the "details"
+        ## self.assertFalse('log' in test.getDetails())
+
+    def test_missing_feature_has_no_log(self):
+        content, result = self.run_subunit_stream('test_missing_feature')
+        self.assertNotContainsRe(content, '(?m)^log$')
+        self.assertNotContainsRe(content, 'missing the feature')
+        self.assertEqual(['_MissingFeature\n'], result.skip_reasons.keys())
+        skips = result.skip_reasons['_MissingFeature\n']
+        self.assertEqual(1, len(skips))
+        test = skips[0]
+        # RemotedTestCase doesn't preserve the "details"
+        ## self.assertFalse('log' in test.getDetails())
+
+    def test_xfail_has_no_log(self):
+        content, result = self.run_subunit_stream('test_xfail')
+        self.assertNotContainsRe(content, '(?m)^log$')
+        self.assertNotContainsRe(content, 'test with expected failure')
+        self.assertEqual(1, len(result.expectedFailures))
+        result_content = result.expectedFailures[0][1]
+        self.assertNotContainsRe(result_content, 'Text attachment: log')
+        self.assertNotContainsRe(result_content, 'test with expected failure')
+
+    def test_unexpected_success_has_log(self):
+        content, result = self.run_subunit_stream('test_unexpected_success')
+        self.assertContainsRe(content, '(?m)^log$')
+        self.assertContainsRe(content, 'test with unexpected success')
+        self.expectFailure('subunit treats "unexpectedSuccess"'
+                           ' as a plain success',
+                           self.assertEqual, 1, len(result.unexpectedSuccesses))
+        self.assertEqual(1, len(result.unexpectedSuccesses))
+        test = result.unexpectedSuccesses[0]
+        # RemotedTestCase doesn't preserve the "details"
+        ## self.assertTrue('log' in test.getDetails())
+
+    def test_success_has_no_log(self):
+        content, result = self.run_subunit_stream('test_success')
+        self.assertEqual(1, result.testsRun)
+        self.assertNotContainsRe(content, '(?m)^log$')
+        self.assertNotContainsRe(content, 'this test succeeds')
+
+
 class TestRunBzr(tests.TestCase):
 
     out = ''
 
=== modified file 'bzrlib/tests/test_setup.py'
--- bzrlib/tests/test_setup.py 2010-09-02 00:49:06 +0000
+++ bzrlib/tests/test_setup.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006 Canonical Ltd
+# Copyright (C) 2005, 2006, 2008, 2009, 2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_status.py'
--- bzrlib/tests/test_status.py 2010-08-31 13:49:48 +0000
+++ bzrlib/tests/test_status.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005 Canonical Ltd
+# Copyright (C) 2006-2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_strace.py'
--- bzrlib/tests/test_strace.py 2010-09-01 19:38:55 +0000
+++ bzrlib/tests/test_strace.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2007, 2010 Canonical Ltd
+# Copyright (C) 2007-2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/test_tuned_gzip.py'
--- bzrlib/tests/test_tuned_gzip.py 2010-08-29 18:29:22 +0000
+++ bzrlib/tests/test_tuned_gzip.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006 Canonical Ltd
+# Copyright (C) 2006, 2009, 2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tests/testui.py'
--- bzrlib/tests/testui.py 2010-09-14 08:14:22 +0000
+++ bzrlib/tests/testui.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2006-2010 Canonical Ltd
+# Copyright (C) 2010 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by

=== modified file 'bzrlib/tuned_gzip.py'
--- bzrlib/tuned_gzip.py 2010-08-29 18:30:45 +0000
+++ bzrlib/tuned_gzip.py 2010-09-26 14:05:09 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006 Canonical Ltd
+# Copyright (C) 2006-2010 Canonical Ltd
 # Written by Robert Collins <robert.collins@canonical.com>
 #
 # This program is free software; you can redistribute it and/or modify