Merge lp:~parthm/bzr/538868-message-for-heavy-checkout into lp:bzr
Status: Superseded
Proposed branch: lp:~parthm/bzr/538868-message-for-heavy-checkout
Merge into: lp:bzr
Diff against target: 336 lines (+139/-23), 8 files modified:
  NEWS (+3/-3)
  bzrlib/builtins.py (+0/-5)
  bzrlib/recordcounter.py (+65/-0)
  bzrlib/remote.py (+2/-1)
  bzrlib/repofmt/groupcompress_repo.py (+23/-4)
  bzrlib/repository.py (+6/-4)
  bzrlib/smart/repository.py (+40/-4)
  bzrlib/tests/blackbox/test_checkout.py (+0/-2)
To merge this branch: bzr merge lp:~parthm/bzr/538868-message-for-heavy-checkout
Related bugs:
Reviewer | Review Type | Status
John A Meinel | | Needs Resubmitting
Martin Pool | 2nd review | Needs Information
Vincent Ladeuil | | Approve
Gary van der Merwe | | Approve
Review via email: mp+24483@code.launchpad.net
This proposal has been superseded by a proposal from 2010-05-14.
Commit message
(parthm) heavyweight checkout now indicates that history is being copied.
Description of the change
=== Fixes Bug #538868 ===
For a heavyweight checkout, show a message indicating that history is being copied and that it may take some time.
Sample output:
[tmp]% ~/src/bzr.
Copying history to "foobar". This may take some time.
bzr: interrupted
[tmp]% ~/src/bzr.
Copying history to "trunk". This may take some time.
bzr: interrupted
The only ugliness I see is in the rare case that to_location already exists. In that case the output is:
[tmp]% ~/src/bzr.
Copying history to "trunk". This may take some time.
bzr: ERROR: File exists: u'/home/
It would be ideal if the "copying history" message were not shown. I suppose that's not too bad, though. I had an early-failure fix for this but haven't included it, since bzr works across multiple transports.
+ # Fail early if to_location/.bzr exists. We don't want to
+ # give a message "Copying history ..." and then fail
+ # saying to_location/.bzr exists.
+ to_loc_bzr = osutils.
+ if osutils.
+ raise errors.
+
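For reference, a completed sketch of that fail-early idea. The elided osutils/errors calls above are left as-is; this version substitutes plain os.path (an assumption for illustration), which also shows why the fix was dropped: it only covers local paths, not arbitrary transports.

```python
import errno
import os


def fail_early_if_checkout_exists(to_location):
    """Raise before any "Copying history ..." message is printed if the
    target already contains a .bzr control directory.

    Note: os.path only works for local paths; real bzr code would need
    transport-aware helpers, which is why this check was not included.
    """
    to_loc_bzr = os.path.join(to_location, '.bzr')
    if os.path.exists(to_loc_bzr):
        # Mirror the "File exists" error the user would otherwise hit later.
        raise OSError(errno.EEXIST, os.strerror(errno.EEXIST), to_loc_bzr)
```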
Martin Pool (mbp) wrote:
Gary van der Merwe (garyvdm):
Vincent Ladeuil (vila) wrote:
Apart from the message tweaks mentioned on IRC, that's good to land!
Robert Collins (lifeless) wrote:
I realise this has gone through, so I'd like to just request some more
stuff if you have time; if not please file a bug.
The message will show up when doing a heavy checkout in a repository;
that's just annoying - no history is being copied, so no message
should appear. Recommended fix: move the notification into the core,
out of builtins.py.
Secondly, if it's worth telling people we're copying [a lot] of history
for checkout, I think it's worth telling them about it for branch and
merge too. Perhaps let's set some sort of heuristic (e.g. 100 or more
revisions) and have the warning trigger on that?
-Rob
Martin Pool (mbp) wrote:
On 6 May 2010 04:28, Robert Collins <email address hidden> wrote:
> I realise this has gone through, so I'd like to just request some more
> stuff if you have time; if not please file a bug.
>
> The message will show up when doing a heavy checkout in a repository;
> that's just annoying - no history is being copied, so no message
> should appear. Recommended fix: move the notification into the core,
> out of builtins.py.
+1
perhaps just showing it from fetch would be best
> Secondly, if it's worth telling people we're copying [a lot] of history
> for checkout, I think it's worth telling them about it for branch and
> merge too. Perhaps let's set some sort of heuristic (e.g. 100 or more
> revisions) and have the warning trigger on that?
-½ on that, because it will create questions about "but it worked
before, what changed?" If we want that kind of approach we should
make sure there's a clear progress bar message, so that it's visible
only while the slow operation is taking place.
--
Martin <http://
Parth Malwankar (parthm) wrote:
On Thu, May 6, 2010 at 8:58 AM, Robert Collins
<email address hidden> wrote:
> I realise this has gone through, so I'd like to just request some more
> stuff if you have time; if not please file a bug.
>
> The message will show up when doing a heavy checkout in a repository;
> that's just annoying - no history is being copied, so no message
> should appear. Recommended fix: move the notification into the core,
> out of builtins.py.
>
> Secondly, if it's worth telling people we're copying [a lot] of history
> for checkout, I think it's worth telling them about it for branch and
> merge too. Perhaps let's set some sort of heuristic (e.g. 100 or more
> revisions) and have the warning trigger on that?
>
Good points. Thanks for the review.
As discussed on IRC, I will work on fixing this.
I don't have a good solution yet. I will propose something taking into
account Martin Pool's recommendation.
Parth Malwankar (parthm) wrote:
So I updated this patch to skip the message when the checkout is done in a shared repo. However, there is an interesting case below.
[tmp]% bzr init-repo foo
Shared repository with trees (format: 2a)
Location:
shared repository: foo
[tmp]% cd foo
[foo]% /home/parthm/
[foo]%
In this case, the entire history is copied, so it does take time. I am wondering if we should just stick to the simpler earlier patch. Alternatively, if there were a way to know how many changes need to be pulled, we could show the message based on that.
This is still checkout specific and doesn't touch other operations.
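To make the alternative concrete, here is a minimal sketch of deciding whether to show the message from a pending-revision count. The helper plumbing is hypothetical: bzrlib itself would need something like Repository.search_missing_revision_ids() to supply the count, and the 100-revision threshold is only the heuristic suggested earlier in this thread.

```python
THRESHOLD = 100  # heuristic suggested in review: 100 or more revisions


def maybe_show_copy_message(show_message, pending_revision_count, to_location):
    """Show the slow-operation notice only for sizeable fetches.

    show_message: callable taking one string (stand-in for the UI factory).
    pending_revision_count: number of revisions to pull, or None if unknown.
    """
    if pending_revision_count is None:
        # Count unknown (e.g. a remote repository): stay quiet rather
        # than guess.
        return False
    if pending_revision_count >= THRESHOLD:
        show_message('Copying history to "%s". This may take some time.'
                     % to_location)
        return True
    return False
```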
Robert Collins (lifeless) wrote:
Well the main point for me is that the issue - lots of history being
copied - is separate from the commands. So I guess I'm really saying
'do it more broadly please'.
-Rob
Martin Pool (mbp) wrote:
test
Martin Pool (mbp) wrote:
test
John A Meinel (jameinel) wrote:
23 pb = ui.ui_factory.
24 + key_count = len(search.
25 try:
^- We've discussed that this is a fairly unfortunate regression, as it requires polling the remote server for the list of revisions rather than just having it stream them out.
I'm pretty sure Parth is already looking at how to fix this.
Parth Malwankar (parthm) wrote:
With a lot of help from John, this patch is in good enough shape for review.
It's evolved from a fix for bug #538868 to a fix for bug #374740.
The intent is to show users an _estimate_ of the amount of work pending in branch/
E.g.
[tmp]% ~/src/bzr.
- Fetching revisions:Inserting stream:Estimate 106429/320381
As the number of records is proportional to the number of revisions to be fetched, and this count is not known for remote operations, the progress bar starts with "Estimating.. X", where X goes from 0 to revs-to-fetch; after that the progress bar changes to what's shown above. For local operations we know the count upfront, so the progress starts at 0/N.
A RecordCounter object has been added to maintain current, max, and key_count, and to encapsulate the estimation algorithm. An instance of this is added to StreamSource, which is then shared among the various sub-streams to show progress. The wrap_and_count generator wraps existing sub-streams with the progress bar printer.
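In outline, the pattern looks like this: a condensed, self-contained sketch of the RecordCounter and wrap_and_count described above, with a plain `pb` object standing in for bzrlib's ui_factory progress bar (an assumption for illustration only).

```python
class RecordCounter(object):
    """Track an estimate of total records while a stream is consumed."""

    def __init__(self):
        self.current = 0
        self.key_count = 0
        self.max = 0
        self.STEP = 71  # update the progress bar every STEP records

    def setup(self, key_count, current=0):
        self.current = current
        self.key_count = key_count
        # 10.3 records per revision is the empirical ratio from the patch.
        self.max = int(key_count * 10.3)

    def increment(self, count):
        self.current += count
        if self.current > self.max:
            # Never let the bar sit past 100%: bump the estimate.
            self.max += self.key_count


def wrap_and_count(pb, rc, stream):
    """Yield records from stream, updating the progress bar every STEP."""
    count = 0
    for record in stream:
        if count == rc.STEP:
            rc.increment(count)
            pb.update('Estimate', rc.current, rc.max)
            count = 0
        count += 1
        yield record
```

Because wrap_and_count is a generator, the records still stream through lazily; the counter only observes them in passing.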
Parth Malwankar (parthm) wrote:
Just to add: this progress is seen during the "Inserting stream" phase, which is the big time consumer. There is still room for improvement in the "Getting stream" and "Inserting missing keys" phases, but that can probably be a separate bug.
Preview Diff
=== modified file 'NEWS'
--- NEWS	2010-05-14 09:02:35 +0000
+++ NEWS	2010-05-14 13:38:33 +0000
@@ -96,9 +96,9 @@
   versions before 1.6.
   (Andrew Bennetts, #528041)
 
-* Heavyweight checkout operation now shows a message to the user indicating
-  history is being copied.
-  (Parth Malwankar, #538868)
+* Improved progress bar for fetch. Bazaar now shows an estimate of the
+  number of records to be fetched vs actually fetched.
+  (Parth Malwankar, #374740, #538868)
 
 * Reduce peak memory by one copy of compressed text.
   (John Arbash Meinel, #566940)
 
=== modified file 'bzrlib/builtins.py'
--- bzrlib/builtins.py	2010-05-14 09:20:34 +0000
+++ bzrlib/builtins.py	2010-05-14 13:38:33 +0000
@@ -1336,11 +1336,6 @@
         except errors.NoWorkingTree:
             source.bzrdir.create_workingtree(revision_id)
             return
-
-        if not lightweight:
-            message = ('Copying history to "%s". '
-                'To checkout without local history use --lightweight.' % to_location)
-            ui.ui_factory.show_message(message)
         source.create_checkout(to_location, revision_id, lightweight,
             accelerator_tree, hardlink)
 
 
=== added file 'bzrlib/recordcounter.py'
--- bzrlib/recordcounter.py	1970-01-01 00:00:00 +0000
+++ bzrlib/recordcounter.py	2010-05-14 13:38:33 +0000
@@ -0,0 +1,65 @@
+# Copyright (C) 2006-2010 Canonical Ltd
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+"""Record counting support for showing progress of revision fetch."""
+
+
+class RecordCounter(object):
+    """Container for maintaining an estimate of the work required for a fetch.
+
+    An instance of this class is used along with a progress bar to provide
+    the user an estimate of the amount of work pending for a fetch (push,
+    pull, branch, checkout) operation.
+    """
+    def __init__(self):
+        self.initialized = False
+        self.current = 0
+        self.key_count = 0
+        self.max = 0
+        self.STEP = 71
+
+    def is_initialized(self):
+        return self.initialized
+
+    def _estimate_max(self, key_count):
+        """Estimate the maximum amount of 'inserting stream' work.
+
+        This is just an estimate.
+        """
+        # Note: The magic number below is based on empirical data
+        # from 3 separate projects. The estimate can probably
+        # be improved but this should work well for most cases.
+        return int(key_count * 10.3)
+
+    def setup(self, key_count, current=0):
+        """Set up RecordCounter with a basic estimate of the work pending.
+
+        Set self.max and self.current to reflect the amount of work
+        pending for a fetch.
+        """
+        self.current = current
+        self.key_count = key_count
+        self.max = self._estimate_max(key_count)
+        self.initialized = True
+
+    def increment(self, count):
+        """Increment self.current by count.
+
+        Apart from incrementing self.current by count, also ensure
+        that self.max > self.current.
+        """
+        self.current += count
+        if self.current > self.max:
+            self.max += self.key_count
+
=== modified file 'bzrlib/remote.py'
--- bzrlib/remote.py	2010-05-13 16:17:54 +0000
+++ bzrlib/remote.py	2010-05-14 13:38:33 +0000
@@ -1980,7 +1980,8 @@
         if response_tuple[0] != 'ok':
             raise errors.UnexpectedSmartServerResponse(response_tuple)
         byte_stream = response_handler.read_streamed_body()
-        src_format, stream = smart_repo._byte_stream_to_stream(byte_stream)
+        src_format, stream = smart_repo._byte_stream_to_stream(byte_stream,
+            self._record_counter)
         if src_format.network_name() != repo._format.network_name():
             raise AssertionError(
                 "Mismatched RemoteRepository and stream src %r, %r" % (
 
=== modified file 'bzrlib/repofmt/groupcompress_repo.py'
--- bzrlib/repofmt/groupcompress_repo.py	2010-05-13 18:52:58 +0000
+++ bzrlib/repofmt/groupcompress_repo.py	2010-05-14 13:38:33 +0000
@@ -1108,13 +1108,29 @@
         yield 'chk_bytes', _get_parent_id_basename_to_file_id_pages()
 
     def get_stream(self, search):
+        def wrap_and_count(pb, rc, stream):
+            """Yield records from stream while showing progress."""
+            count = 0
+            for record in stream:
+                if count == rc.STEP:
+                    rc.increment(count)
+                    pb.update('Estimate', rc.current, rc.max)
+                    count = 0
+                count += 1
+                yield record
+
         revision_ids = search.get_keys()
+        pb = ui.ui_factory.nested_progress_bar()
+        rc = self._record_counter
+        self._record_counter.setup(len(revision_ids))
         for stream_info in self._fetch_revision_texts(revision_ids):
-            yield stream_info
+            yield (stream_info[0],
+                wrap_and_count(pb, rc, stream_info[1]))
         self._revision_keys = [(rev_id,) for rev_id in revision_ids]
         self.from_repository.revisions.clear_cache()
         self.from_repository.signatures.clear_cache()
-        yield self._get_inventory_stream(self._revision_keys)
+        s = self._get_inventory_stream(self._revision_keys)
+        yield (s[0], wrap_and_count(pb, rc, s[1]))
         self.from_repository.inventories.clear_cache()
         # TODO: The keys to exclude might be part of the search recipe
         # For now, exclude all parents that are at the edge of ancestry, for
@@ -1123,10 +1139,13 @@
         parent_keys = from_repo._find_parent_keys_of_revisions(
             self._revision_keys)
         for stream_info in self._get_filtered_chk_streams(parent_keys):
-            yield stream_info
+            yield (stream_info[0], wrap_and_count(pb, rc, stream_info[1]))
         self.from_repository.chk_bytes.clear_cache()
-        yield self._get_text_stream()
+        s = self._get_text_stream()
+        yield (s[0], wrap_and_count(pb, rc, s[1]))
         self.from_repository.texts.clear_cache()
+        pb.update('Done', rc.max, rc.max)
+        pb.finished()
 
     def get_stream_for_missing_keys(self, missing_keys):
         # missing keys can only occur when we are byte copying and not
 
=== modified file 'bzrlib/repository.py'
--- bzrlib/repository.py	2010-05-13 18:52:58 +0000
+++ bzrlib/repository.py	2010-05-14 13:38:33 +0000
@@ -43,7 +43,6 @@
     symbol_versioning,
     trace,
     tsort,
-    ui,
     versionedfile,
     )
 from bzrlib.bundle import serializer
@@ -55,6 +54,7 @@
 from bzrlib import (
     errors,
     registry,
+    ui,
     )
 from bzrlib.decorators import needs_read_lock, needs_write_lock, only_raises
 from bzrlib.inter import InterObject
@@ -64,6 +64,7 @@
     ROOT_ID,
     entry_factory,
     )
+from bzrlib.recordcounter import RecordCounter
 from bzrlib.lock import _RelockDebugMixin, LogicalLockResult
 from bzrlib.trace import (
     log_exception_quietly, note, mutter, mutter_callsite, warning)
@@ -4283,7 +4284,8 @@
         is_resume = False
         try:
             # locked_insert_stream performs a commit|suspend.
-            return self._locked_insert_stream(stream, src_format, is_resume)
+            return self._locked_insert_stream(stream, src_format,
+                is_resume)
         except:
             self.target_repo.abort_write_group(suppress_errors=True)
             raise
@@ -4336,8 +4338,7 @@
                 # required if the serializers are different only in terms of
                 # the inventory.
                 if src_serializer == to_serializer:
-                    self.target_repo.revisions.insert_record_stream(
-                        substream)
+                    self.target_repo.revisions.insert_record_stream(substream)
                 else:
                     self._extract_and_insert_revisions(substream,
                         src_serializer)
@@ -4451,6 +4452,7 @@
         """Create a StreamSource streaming from from_repository."""
         self.from_repository = from_repository
         self.to_format = to_format
+        self._record_counter = RecordCounter()
 
     def delta_on_metadata(self):
         """Return True if delta's are permitted on metadata streams.
 
=== modified file 'bzrlib/smart/repository.py'
--- bzrlib/smart/repository.py	2010-05-06 23:41:35 +0000
+++ bzrlib/smart/repository.py	2010-05-14 13:38:33 +0000
@@ -39,6 +39,7 @@
     SuccessfulSmartServerResponse,
     )
 from bzrlib.repository import _strip_NULL_ghosts, network_format_registry
+from bzrlib.recordcounter import RecordCounter
 from bzrlib import revision as _mod_revision
 from bzrlib.versionedfile import (
     NetworkRecordStream,
@@ -544,12 +545,14 @@
     :ivar first_bytes: The first bytes to give the next NetworkRecordStream.
     """
 
-    def __init__(self, byte_stream):
+    def __init__(self, byte_stream, record_counter):
         """Create a _ByteStreamDecoder."""
         self.stream_decoder = pack.ContainerPushParser()
         self.current_type = None
         self.first_bytes = None
         self.byte_stream = byte_stream
+        self._record_counter = record_counter
+        self.key_count = 0
 
     def iter_stream_decoder(self):
         """Iterate the contents of the pack from stream_decoder."""
@@ -580,13 +583,46 @@
 
     def record_stream(self):
         """Yield substream_type, substream from the byte stream."""
+        def wrap_and_count(pb, rc, substream):
+            """Yield records from stream while showing progress."""
+            counter = 0
+            if rc:
+                if self.current_type != 'revisions' and self.key_count != 0:
+                    # As we know the number of revisions now (in self.key_count)
+                    # we can setup and use record_counter (rc).
+                    if not rc.is_initialized():
+                        rc.setup(self.key_count, self.key_count)
+            for record in substream.read():
+                if rc:
+                    if rc.is_initialized() and counter == rc.STEP:
+                        rc.increment(counter)
+                        pb.update('Estimate', rc.current, rc.max)
+                        counter = 0
+                    if self.current_type == 'revisions':
+                        # Total records is proportional to number of revs
+                        # to fetch. With remote, we used self.key_count to
+                        # track the number of revs. Once we have the revs
+                        # counts in self.key_count, the progress bar changes
+                        # from 'Estimating..' to 'Estimate' above.
+                        self.key_count += 1
+                        if counter == rc.STEP:
+                            pb.update('Estimating..', self.key_count)
+                            counter = 0
+                counter += 1
+                yield record
+
         self.seed_state()
+        pb = ui.ui_factory.nested_progress_bar()
+        rc = self._record_counter
         # Make and consume sub generators, one per substream type:
         while self.first_bytes is not None:
             substream = NetworkRecordStream(self.iter_substream_bytes())
             # after substream is fully consumed, self.current_type is set to
             # the next type, and self.first_bytes is set to the matching bytes.
-            yield self.current_type, substream.read()
+            yield self.current_type, wrap_and_count(pb, rc, substream)
+        if rc:
+            pb.update('Done', rc.max, rc.max)
+        pb.finished()
 
     def seed_state(self):
         """Prepare the _ByteStreamDecoder to decode from the pack stream."""
@@ -597,13 +633,13 @@
         list(self.iter_substream_bytes())
 
 
-def _byte_stream_to_stream(byte_stream):
+def _byte_stream_to_stream(byte_stream, record_counter=None):
     """Convert a byte stream into a format and a stream.
 
     :param byte_stream: A bytes iterator, as output by _stream_to_byte_stream.
     :return: (RepositoryFormat, stream_generator)
     """
-    decoder = _ByteStreamDecoder(byte_stream)
+    decoder = _ByteStreamDecoder(byte_stream, record_counter)
     for bytes in byte_stream:
         decoder.stream_decoder.accept_bytes(bytes)
         for record in decoder.stream_decoder.read_pending_records(max=1):
 
=== modified file 'bzrlib/tests/blackbox/test_checkout.py'
--- bzrlib/tests/blackbox/test_checkout.py	2010-04-30 09:52:08 +0000
+++ bzrlib/tests/blackbox/test_checkout.py	2010-05-14 13:38:33 +0000
@@ -65,7 +65,6 @@
 
     def test_checkout_dash_r(self):
         out, err = self.run_bzr(['checkout', '-r', '-2', 'branch', 'checkout'])
-        self.assertContainsRe(out, 'Copying history to "checkout".')
         # the working tree should now be at revision '1' with the content
         # from 1.
         result = bzrdir.BzrDir.open('checkout')
@@ -75,7 +74,6 @@
     def test_checkout_light_dash_r(self):
         out, err = self.run_bzr(['checkout','--lightweight', '-r', '-2',
                                  'branch', 'checkout'])
-        self.assertNotContainsRe(out, 'Copying history')
         # the working tree should now be at revision '1' with the content
         # from 1.
         result = bzrdir.BzrDir.open('checkout')
Thanks, this is a very nice bug to fix.
I would prefer the message came out through trace or the ui factory
rather than directly to self.outf, because that will make it easier to
refactor out of the cmd implementation, and it's more likely to
automatically respect --quiet. You might then be able to test more
cleanly through TestUIFactory.