Merge lp:~parthm/bzr/538868-message-for-heavy-checkout into lp:bzr

Proposed by Parth Malwankar
Status: Superseded
Proposed branch: lp:~parthm/bzr/538868-message-for-heavy-checkout
Merge into: lp:bzr
Diff against target: 336 lines (+139/-23)
8 files modified
NEWS (+3/-3)
bzrlib/builtins.py (+0/-5)
bzrlib/recordcounter.py (+65/-0)
bzrlib/remote.py (+2/-1)
bzrlib/repofmt/groupcompress_repo.py (+23/-4)
bzrlib/repository.py (+6/-4)
bzrlib/smart/repository.py (+40/-4)
bzrlib/tests/blackbox/test_checkout.py (+0/-2)
To merge this branch: bzr merge lp:~parthm/bzr/538868-message-for-heavy-checkout
Reviewer Review Type Date Requested Status
John A Meinel Needs Resubmitting
Martin Pool 2nd review Needs Information
Vincent Ladeuil Approve
Gary van der Merwe Approve
Review via email: mp+24483@code.launchpad.net

This proposal has been superseded by a proposal from 2010-05-14.

Commit message

(parthm) heavyweight checkout now indicates that history is being copied.

Description of the change

=== Fixes Bug #538868 ===
For a heavyweight checkout, show a message indicating that history is being copied and that it may take some time.

Sample output:

[tmp]% ~/src/bzr.dev/538868-message-for-heavy-checkout/bzr --no-plugins checkout ~/src/bzr.dev/trunk foobar
Copying history to "foobar". This may take some time.
bzr: interrupted
[tmp]% ~/src/bzr.dev/538868-message-for-heavy-checkout/bzr --no-plugins checkout ~/src/bzr.dev/trunk
Copying history to "trunk". This may take some time.
bzr: interrupted

The only ugliness I see is in the odd case where to_location already exists. In that case the output is:

[tmp]% ~/src/bzr.dev/538868-message-for-heavy-checkout/bzr --no-plugins checkout ~/src/bzr.dev/trunk
Copying history to "trunk". This may take some time.
bzr: ERROR: File exists: u'/home/parthm/tmp/trunk/.bzr': [Errno 17] File exists: '/home/parthm/tmp/trunk/.bzr'

Ideally the "copying history" message would not be shown in this case, though I suppose that's not too bad. I had an early-failure fix for this but haven't included it, considering that bzr works across multiple transports:

+        # Fail early if to_location/.bzr exists. We don't want to
+        # give a message "Copying history ..." and then fail
+        # saying to_location/.bzr exists.
+        to_loc_bzr = osutils.joinpath([to_location, '.bzr'])
+        if osutils.lexists(to_loc_bzr):
+            raise errors.BzrCommandError('"%s" exists.' % to_loc_bzr)
+

Revision history for this message
Martin Pool (mbp) wrote :

Thanks, this is a very nice bug to fix.

I would prefer the message came out through trace or the ui factory
than directly to self.outf, because that will make it easier to
refactor out of the cmd implementation, and it's more likely to
automatically respect --quiet. You might then be able to test more
cleanly through TestUIFactory.
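The point about the UI factory can be illustrated with a toy stand-in (a hypothetical class, not bzrlib's actual UIFactory): a message routed through a factory can honour --quiet in one central place, unlike a direct write to self.outf.

```python
class ToyUIFactory(object):
    """Hypothetical stand-in for a UI factory that owns message output."""

    def __init__(self, quiet=False):
        self.quiet = quiet
        self.shown = []

    def show_message(self, message):
        # --quiet is honoured here, once, instead of at every call site.
        if not self.quiet:
            self.shown.append(message)


noisy = ToyUIFactory()
quiet = ToyUIFactory(quiet=True)
for factory in (noisy, quiet):
    factory.show_message('Copying history to "trunk". This may take some time.')
```

With the quiet factory the message is simply dropped, so commands need no per-call --quiet checks.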

Revision history for this message
Gary van der Merwe (garyvdm) :
review: Approve
Revision history for this message
Vincent Ladeuil (vila) wrote :

Apart from the message tweaks mentioned on IRC, that's good to land !

review: Approve
Revision history for this message
Robert Collins (lifeless) wrote :

I realise this has gone through, so I'd like to just request some more
stuff if you have time; if not please file a bug.

The message will show up when doing a heavy checkout in a repository;
that's just annoying - no history is being copied, so no message
should appear. Recommended fix: move the notification into the core,
out of builtins.py.

Secondly, if it's worth telling people we're copying [a lot] of history
for checkout, I think it's worth telling them about it for branch and
merge too. Perhaps let's set some sort of heuristic (e.g. 100 or more
revisions) and have the warning trigger on that?
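A heuristic along the lines Robert suggests could be sketched roughly like this (the threshold, helper name, and message text are hypothetical, not part of any patch):

```python
# Hypothetical sketch: only warn when the amount of history to copy
# crosses a threshold, so small fetches stay silent.
REVISION_THRESHOLD = 100  # cutoff floated in the review, not a real constant


def should_warn_about_history(num_revisions, threshold=REVISION_THRESHOLD):
    """Return True when enough history is being copied to justify a notice."""
    return num_revisions >= threshold


messages = []
if should_warn_about_history(320):
    messages.append('Copying history. This may take some time.')
```

As Martin notes below, a progress bar that only appears while the slow phase runs may be the better answer than a threshold.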

-Rob

Revision history for this message
Martin Pool (mbp) wrote :

On 6 May 2010 04:28, Robert Collins <email address hidden> wrote:
> I realise this has gone through, so I'd like to just request some more
> stuff if you have time; if not please file a bug.
>
> The message will show up when doing a heavy checking in a repository;
> that's just annoying - no history is being copied, so no message
> should appear. Recommended fix: move the notification into the core,
> out of builtins.py.

+1

perhaps just showing it from fetch would be best

> Secondly, if its worth telling people we're copying [a lot] of history
> for checkout, I think its worth telling them about it for branch and
> merge too. Perhaps lets set some sort of heuristic (e.g. 100 or more
> revisions) and have the warning trigger on that?

-½ on that, because it will create questions about "but it worked
before, what changed?" If we want that kind of approach we should
make sure there's a clear progress bar message, so that it's visible
only while the slow operation is taking place.

--
Martin <http://launchpad.net/~mbp/>

Revision history for this message
Parth Malwankar (parthm) wrote :

On Thu, May 6, 2010 at 8:58 AM, Robert Collins
<email address hidden> wrote:
> I realise this has gone through, so I'd like to just request some more
> stuff if you have time; if not please file a bug.
>
> The message will show up when doing a heavy checking in a repository;
> that's just annoying - no history is being copied, so no message
> should appear. Recommended fix: move the notification into the core,
> out of builtins.py.
>
> Secondly, if its worth telling people we're copying [a lot] of history
> for checkout, I think its worth telling them about it for branch and
> merge too. Perhaps lets set some sort of heuristic (e.g. 100 or more
> revisions) and have the warning trigger on that?
>

Good points. Thanks for the review.
As discussed on IRC, I will work on fixing this.
I don't have a good solution yet. Will propose something taking into
account Martin Pool's recommendation.

Revision history for this message
Parth Malwankar (parthm) wrote :

So I updated this patch to skip the message when checkout is done in a shared repo. However, there is an interesting case below.

[tmp]% bzr init-repo foo
Shared repository with trees (format: 2a)
Location:
  shared repository: foo
[tmp]% cd foo
[foo]% /home/parthm/src/bzr.dev/538868-message-for-heavy-checkout/bzr checkout ~/src/bzr.dev/trunk foo
[foo]%

In this case, the entire history is copied, so it does take time. I am wondering if we should just stick to the simpler earlier patch. Alternatively, if there were a way to know how many changes need to be pulled, we could show the message based on that.

This is still checkout specific and doesn't touch other operations.

Revision history for this message
Robert Collins (lifeless) wrote :

Well the main point for me is that the issue - lots of history being
copied - is separate from the commands. So I guess I'm really saying
'do it more broadly please'.

-Rob

Revision history for this message
Martin Pool (mbp) wrote :

test

Revision history for this message
Martin Pool (mbp) wrote :

test

review: Needs Information (2nd review)
Revision history for this message
John A Meinel (jameinel) wrote :

23 pb = ui.ui_factory.nested_progress_bar()
24 + key_count = len(search.get_keys())
25 try:

^- We've discussed that this is a fairly unfortunate regression, as it requires polling the remote server for the list of revisions rather than just having it stream them out.

I'm pretty sure Parth is already looking at how to fix this.

review: Needs Resubmitting
Revision history for this message
Parth Malwankar (parthm) wrote :

With a lot of help from John, this patch is in good enough shape for review.
It's evolved from a fix for bug #538868 into a fix for bug #374740.

The intent is to show users an _estimate_ of the amount of work pending in branch/push/pull/checkout (remote-local, local-remote, remote-remote) operations. This is done by showing the number of records pending.

E.g.

[tmp]% ~/src/bzr.dev/edge/bzr checkout ~/src/bzr.dev/trunk pqr
- Fetching revisions:Inserting stream:Estimate 106429/320381

The number of records is proportional to the number of revisions to be fetched. For remote operations this count is not known up front, so the progress bar starts with "Estimating.. X", where X goes from 0 to the number of revisions to fetch; after that it changes to what's shown above. For local operations we know the count up front, so the progress starts at 0/N.

A RecordCounter object has been added to maintain current, max, and key_count, and to encapsulate the estimation algorithm. An instance of this is added to StreamSource and is then shared among the various sub-streams to show progress. The wrap_and_count generator wraps existing sub-streams with the progress-bar printer.
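The estimation algorithm is small enough to exercise standalone. This sketch mirrors the RecordCounter in the preview diff (the 10.3 factor and STEP come from the patch; the driver lines at the bottom are illustrative only):

```python
# Simplified RecordCounter mirroring the patch: max is only an estimate
# (key_count * 10.3, an empirical records-per-revision factor), and
# increment() stretches max whenever real work overtakes the estimate,
# so the progress bar never pegs at 100% prematurely.
class RecordCounter(object):
    def __init__(self):
        self.initialized = False
        self.current = 0
        self.key_count = 0
        self.max = 0
        self.STEP = 71  # update the progress bar every STEP records

    def setup(self, key_count, current=0):
        self.current = current
        self.key_count = key_count
        self.max = int(key_count * 10.3)  # empirical estimate from the patch
        self.initialized = True

    def increment(self, count):
        self.current += count
        if self.current > self.max:
            self.max += self.key_count  # grow the estimate as needed


rc = RecordCounter()
rc.setup(100)       # 100 revisions to fetch -> rc.max estimated at 1030
rc.increment(1100)  # actual records overtook the estimate; max grows
```

After the increment, current is 1100 and max has been stretched to 1130, which matches the "Estimate 106429/320381" style of output shown above.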

Revision history for this message
Parth Malwankar (parthm) wrote :

Just to add: this progress is seen during the "Inserting stream" phase, which is the big time consumer. There is still room for improvement in the "Getting stream" and "Inserting missing keys" phases, but that can probably be a separate bug.

Preview Diff

=== modified file 'NEWS'
--- NEWS 2010-05-14 09:02:35 +0000
+++ NEWS 2010-05-14 13:38:33 +0000
@@ -96,9 +96,9 @@
   versions before 1.6.
   (Andrew Bennetts, #528041)

-* Heavyweight checkout operation now shows a message to the user indicating
-  history is being copied.
-  (Parth Malwankar, #538868)
+* Improved progress bar for fetch. Bazaar now shows an estimate of the
+  number of records to be fetched vs actually fetched.
+  (Parth Malwankar, #374740, #538868)

 * Reduce peak memory by one copy of compressed text.
   (John Arbash Meinel, #566940)
=== modified file 'bzrlib/builtins.py'
--- bzrlib/builtins.py 2010-05-14 09:20:34 +0000
+++ bzrlib/builtins.py 2010-05-14 13:38:33 +0000
@@ -1336,11 +1336,6 @@
             except errors.NoWorkingTree:
                 source.bzrdir.create_workingtree(revision_id)
                 return
-
-        if not lightweight:
-            message = ('Copying history to "%s". '
-                'To checkout without local history use --lightweight.' % to_location)
-            ui.ui_factory.show_message(message)
         source.create_checkout(to_location, revision_id, lightweight,
             accelerator_tree, hardlink)

=== added file 'bzrlib/recordcounter.py'
--- bzrlib/recordcounter.py 1970-01-01 00:00:00 +0000
+++ bzrlib/recordcounter.py 2010-05-14 13:38:33 +0000
@@ -0,0 +1,65 @@
+# Copyright (C) 2006-2010 Canonical Ltd
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+"""Record counting support for showing progress of revision fetch."""
+
+class RecordCounter(object):
+    """Container for maintaining estimates of work required for fetch.
+
+    An instance of this class is used along with a progress bar to provide
+    the user an estimate of the amount of work pending for a fetch (push,
+    pull, branch, checkout) operation.
+    """
+    def __init__(self):
+        self.initialized = False
+        self.current = 0
+        self.key_count = 0
+        self.max = 0
+        self.STEP = 71
+
+    def is_initialized(self):
+        return self.initialized
+
+    def _estimate_max(self, key_count):
+        """Estimate the maximum amount of 'inserting stream' work.
+
+        This is just an estimate.
+        """
+        # Note: The magic number below is based on empirical data
+        # from 3 separate projects. The estimate can probably be
+        # improved but this should work well for most cases.
+        return int(key_count * 10.3)
+
+    def setup(self, key_count, current=0):
+        """Setup RecordCounter with a basic estimate of work pending.
+
+        Set self.max and self.current to reflect the amount of work
+        pending for a fetch.
+        """
+        self.current = current
+        self.key_count = key_count
+        self.max = self._estimate_max(key_count)
+        self.initialized = True
+
+    def increment(self, count):
+        """Increment self.current by count.
+
+        Apart from incrementing self.current by count, also ensure
+        that self.max > self.current.
+        """
+        self.current += count
+        if self.current > self.max:
+            self.max += self.key_count
=== modified file 'bzrlib/remote.py'
--- bzrlib/remote.py 2010-05-13 16:17:54 +0000
+++ bzrlib/remote.py 2010-05-14 13:38:33 +0000
@@ -1980,7 +1980,8 @@
         if response_tuple[0] != 'ok':
             raise errors.UnexpectedSmartServerResponse(response_tuple)
         byte_stream = response_handler.read_streamed_body()
-        src_format, stream = smart_repo._byte_stream_to_stream(byte_stream)
+        src_format, stream = smart_repo._byte_stream_to_stream(byte_stream,
+            self._record_counter)
         if src_format.network_name() != repo._format.network_name():
             raise AssertionError(
                 "Mismatched RemoteRepository and stream src %r, %r" % (

=== modified file 'bzrlib/repofmt/groupcompress_repo.py'
--- bzrlib/repofmt/groupcompress_repo.py 2010-05-13 18:52:58 +0000
+++ bzrlib/repofmt/groupcompress_repo.py 2010-05-14 13:38:33 +0000
@@ -1108,13 +1108,29 @@
         yield 'chk_bytes', _get_parent_id_basename_to_file_id_pages()

     def get_stream(self, search):
+        def wrap_and_count(pb, rc, stream):
+            """Yield records from stream while showing progress."""
+            count = 0
+            for record in stream:
+                if count == rc.STEP:
+                    rc.increment(count)
+                    pb.update('Estimate', rc.current, rc.max)
+                    count = 0
+                count += 1
+                yield record
+
         revision_ids = search.get_keys()
+        pb = ui.ui_factory.nested_progress_bar()
+        rc = self._record_counter
+        self._record_counter.setup(len(revision_ids))
         for stream_info in self._fetch_revision_texts(revision_ids):
-            yield stream_info
+            yield (stream_info[0],
+                   wrap_and_count(pb, rc, stream_info[1]))
         self._revision_keys = [(rev_id,) for rev_id in revision_ids]
         self.from_repository.revisions.clear_cache()
         self.from_repository.signatures.clear_cache()
-        yield self._get_inventory_stream(self._revision_keys)
+        s = self._get_inventory_stream(self._revision_keys)
+        yield (s[0], wrap_and_count(pb, rc, s[1]))
         self.from_repository.inventories.clear_cache()
         # TODO: The keys to exclude might be part of the search recipe
         # For now, exclude all parents that are at the edge of ancestry, for
@@ -1123,10 +1139,13 @@
         parent_keys = from_repo._find_parent_keys_of_revisions(
             self._revision_keys)
         for stream_info in self._get_filtered_chk_streams(parent_keys):
-            yield stream_info
+            yield (stream_info[0], wrap_and_count(pb, rc, stream_info[1]))
         self.from_repository.chk_bytes.clear_cache()
-        yield self._get_text_stream()
+        s = self._get_text_stream()
+        yield (s[0], wrap_and_count(pb, rc, s[1]))
         self.from_repository.texts.clear_cache()
+        pb.update('Done', rc.max, rc.max)
+        pb.finished()

     def get_stream_for_missing_keys(self, missing_keys):
         # missing keys can only occur when we are byte copying and not

=== modified file 'bzrlib/repository.py'
--- bzrlib/repository.py 2010-05-13 18:52:58 +0000
+++ bzrlib/repository.py 2010-05-14 13:38:33 +0000
@@ -43,7 +43,6 @@
     symbol_versioning,
     trace,
     tsort,
-    ui,
     versionedfile,
     )
 from bzrlib.bundle import serializer
@@ -55,6 +54,7 @@
 from bzrlib import (
     errors,
     registry,
+    ui,
     )
 from bzrlib.decorators import needs_read_lock, needs_write_lock, only_raises
 from bzrlib.inter import InterObject
@@ -64,6 +64,7 @@
     ROOT_ID,
     entry_factory,
     )
+from bzrlib.recordcounter import RecordCounter
 from bzrlib.lock import _RelockDebugMixin, LogicalLockResult
 from bzrlib.trace import (
     log_exception_quietly, note, mutter, mutter_callsite, warning)
@@ -4283,7 +4284,8 @@
         is_resume = False
         try:
             # locked_insert_stream performs a commit|suspend.
-            return self._locked_insert_stream(stream, src_format, is_resume)
+            return self._locked_insert_stream(stream, src_format,
+                is_resume)
         except:
             self.target_repo.abort_write_group(suppress_errors=True)
             raise
@@ -4336,8 +4338,7 @@
                 # required if the serializers are different only in terms of
                 # the inventory.
                 if src_serializer == to_serializer:
-                    self.target_repo.revisions.insert_record_stream(
-                        substream)
+                    self.target_repo.revisions.insert_record_stream(substream)
                 else:
                     self._extract_and_insert_revisions(substream,
                         src_serializer)
@@ -4451,6 +4452,7 @@
         """Create a StreamSource streaming from from_repository."""
         self.from_repository = from_repository
         self.to_format = to_format
+        self._record_counter = RecordCounter()

     def delta_on_metadata(self):
         """Return True if delta's are permitted on metadata streams.

=== modified file 'bzrlib/smart/repository.py'
--- bzrlib/smart/repository.py 2010-05-06 23:41:35 +0000
+++ bzrlib/smart/repository.py 2010-05-14 13:38:33 +0000
@@ -39,6 +39,7 @@
     SuccessfulSmartServerResponse,
     )
 from bzrlib.repository import _strip_NULL_ghosts, network_format_registry
+from bzrlib.recordcounter import RecordCounter
 from bzrlib import revision as _mod_revision
 from bzrlib.versionedfile import (
     NetworkRecordStream,
@@ -544,12 +545,14 @@
     :ivar first_bytes: The first bytes to give the next NetworkRecordStream.
     """

-    def __init__(self, byte_stream):
+    def __init__(self, byte_stream, record_counter):
         """Create a _ByteStreamDecoder."""
         self.stream_decoder = pack.ContainerPushParser()
         self.current_type = None
         self.first_bytes = None
         self.byte_stream = byte_stream
+        self._record_counter = record_counter
+        self.key_count = 0

     def iter_stream_decoder(self):
         """Iterate the contents of the pack from stream_decoder."""
@@ -580,13 +583,46 @@

     def record_stream(self):
         """Yield substream_type, substream from the byte stream."""
+        def wrap_and_count(pb, rc, substream):
+            """Yield records from stream while showing progress."""
+            counter = 0
+            if rc:
+                if self.current_type != 'revisions' and self.key_count != 0:
+                    # As we know the number of revisions now (in self.key_count)
+                    # we can setup and use record_counter (rc).
+                    if not rc.is_initialized():
+                        rc.setup(self.key_count, self.key_count)
+            for record in substream.read():
+                if rc:
+                    if rc.is_initialized() and counter == rc.STEP:
+                        rc.increment(counter)
+                        pb.update('Estimate', rc.current, rc.max)
+                        counter = 0
+                    if self.current_type == 'revisions':
+                        # Total records is proportional to number of revs
+                        # to fetch. With remote, we used self.key_count to
+                        # track the number of revs. Once we have the revs
+                        # counts in self.key_count, the progress bar changes
+                        # from 'Estimating..' to 'Estimate' above.
+                        self.key_count += 1
+                        if counter == rc.STEP:
+                            pb.update('Estimating..', self.key_count)
+                            counter = 0
+                counter += 1
+                yield record
+
         self.seed_state()
+        pb = ui.ui_factory.nested_progress_bar()
+        rc = self._record_counter
         # Make and consume sub generators, one per substream type:
         while self.first_bytes is not None:
             substream = NetworkRecordStream(self.iter_substream_bytes())
             # after substream is fully consumed, self.current_type is set to
             # the next type, and self.first_bytes is set to the matching bytes.
-            yield self.current_type, substream.read()
+            yield self.current_type, wrap_and_count(pb, rc, substream)
+        if rc:
+            pb.update('Done', rc.max, rc.max)
+        pb.finished()

     def seed_state(self):
         """Prepare the _ByteStreamDecoder to decode from the pack stream."""
@@ -597,13 +633,13 @@
         list(self.iter_substream_bytes())


-def _byte_stream_to_stream(byte_stream):
+def _byte_stream_to_stream(byte_stream, record_counter=None):
     """Convert a byte stream into a format and a stream.

     :param byte_stream: A bytes iterator, as output by _stream_to_byte_stream.
     :return: (RepositoryFormat, stream_generator)
     """
-    decoder = _ByteStreamDecoder(byte_stream)
+    decoder = _ByteStreamDecoder(byte_stream, record_counter)
     for bytes in byte_stream:
         decoder.stream_decoder.accept_bytes(bytes)
         for record in decoder.stream_decoder.read_pending_records(max=1):

=== modified file 'bzrlib/tests/blackbox/test_checkout.py'
--- bzrlib/tests/blackbox/test_checkout.py 2010-04-30 09:52:08 +0000
+++ bzrlib/tests/blackbox/test_checkout.py 2010-05-14 13:38:33 +0000
@@ -65,7 +65,6 @@

     def test_checkout_dash_r(self):
         out, err = self.run_bzr(['checkout', '-r', '-2', 'branch', 'checkout'])
-        self.assertContainsRe(out, 'Copying history to "checkout".')
         # the working tree should now be at revision '1' with the content
         # from 1.
         result = bzrdir.BzrDir.open('checkout')
@@ -75,7 +74,6 @@
     def test_checkout_light_dash_r(self):
         out, err = self.run_bzr(['checkout','--lightweight', '-r', '-2',
                                  'branch', 'checkout'])
-        self.assertNotContainsRe(out, 'Copying history')
         # the working tree should now be at revision '1' with the content
         # from 1.
         result = bzrdir.BzrDir.open('checkout')