Merge lp:~jameinel/bzr/2.1b1-pack-on-the-fly into lp:bzr

Proposed by: John A Meinel
Status: Merged
Merged at revision: not available
Proposed branch: lp:~jameinel/bzr/2.1b1-pack-on-the-fly
Merge into: lp:bzr
Diff against target: None lines
To merge this branch: bzr merge lp:~jameinel/bzr/2.1b1-pack-on-the-fly
Reviewer: bzr-core (pending)
Review via email: mp+11162@code.launchpad.net
John A Meinel (jameinel) wrote:

Robert Collins (lifeless) wrote:
Conceptually great; I'm looking now.
The review merge diff is broken; I'm going to pull locally, sync up, and get a clean diff.
Preview Diff
=== modified file 'Makefile'
--- Makefile	2009-08-03 20:38:39 +0000
+++ Makefile	2009-08-27 00:53:27 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006, 2007, 2008 Canonical Ltd
+# Copyright (C) 2005, 2006, 2007, 2008, 2009 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -40,8 +40,6 @@
 
 check-nodocs: extensions
 	$(PYTHON) -Werror -O ./bzr selftest -1v $(tests)
-	@echo "Running all tests with no locale."
-	LC_CTYPE= LANG=C LC_ALL= ./bzr selftest -1v $(tests) 2>&1 | sed -e 's/^/[ascii] /'
 
 # Run Python style checker (apt-get install pyflakes)
 #
=== modified file 'NEWS'
--- NEWS	2009-08-30 22:02:45 +0000
+++ NEWS	2009-09-03 21:04:22 +0000
@@ -6,6 +6,55 @@
 .. contents:: List of Releases
    :depth: 1
 
+In Development
+##############
+
+Compatibility Breaks
+********************
+
+New Features
+************
+
+Bug Fixes
+*********
+
+* ``bzr check`` in pack-0.92, 1.6 and 1.9 format repositories will no
+  longer report incorrect errors about ``Missing inventory ('TREE_ROOT', ...)``
+  (Robert Collins, #416732)
+
+* Don't restrict the command name used to run the test suite.
+  (Vincent Ladeuil, #419950)
+
+Improvements
+************
+
+Documentation
+*************
+
+API Changes
+***********
+
+* ``bzrlib.tests`` now uses ``stopTestRun`` for its ``TestResult``
+  subclasses - the same as python's unittest module. (Robert Collins)
+
+Internals
+*********
+
+* The ``bzrlib.lsprof`` module has a new class ``BzrProfiler`` which makes
+  profiling in some situations like callbacks and generators easier.
+  (Robert Collins)
+
+Testing
+*******
+
+* Passing ``--lsprof-tests -v`` to bzr selftest will cause lsprof output to
+  be output for every test. Note that this is very verbose! (Robert Collins)
+
+* Test parameterisation now does a shallow copy, not a deep copy of the test
+  to be parameterised. This is not expected to break external use of test
+  parameterisation, and is substantially faster. (Robert Collins)
+
+
 bzr 2.0rc2
 ##########
 
@@ -20,10 +69,34 @@
   revisions that are in the fallback repository. (Regressed in 2.0rc1).
   (John Arbash Meinel, #419241)
 
+* Fetches from 2a to 2a are now again requested in 'groupcompress' order.
+  Groups that are seen as 'underutilized' will be repacked on-the-fly.
+  This means that when the source is fully packed, there is minimal
+  overhead during the fetch, but if the source is poorly packed the result
+  is a fairly well packed repository (not as good as 'bzr pack' but
+  good-enough.) (Robert Collins, John Arbash Meinel, #402652)
+
 * Fix a segmentation fault when computing the ``merge_sort`` of a graph
   that has a ghost in the mainline ancestry.
   (John Arbash Meinel, #419241)
 
+* ``groupcompress`` sort order is now more stable, rather than relying on
+  ``topo_sort`` ordering. The implementation is now
+  ``KnownGraph.gc_sort``. (John Arbash Meinel)
+
+* Local data conversion will generate correct deltas. This is a critical
+  bugfix vs 2.0rc1, and all 2.0rc1 users should upgrade to 2.0rc2 before
+  converting repositories. (Robert Collins, #422849)
+
+* Network streams now decode adjacent records of the same type into a
+  single stream, reducing layering churn. (Robert Collins)
+
+Documentation
+*************
+
+* The main table of contents now provides links to the new Migration Docs
+  and Plugins Guide. (Ian Clatworthy)
+
 
 bzr 2.0rc1
 ##########
@@ -64,6 +137,9 @@
 Bug Fixes
 *********
 
+* Further tweaks to handling of ``bzr add`` messages about ignored files.
+  (Jason Spashett, #76616)
+
 * Fetches were being requested in 'groupcompress' order, but weren't
   recombining the groups. Thus they would 'fragment' to get the correct
   order, but not 'recombine' to actually benefit from it. Until we get
@@ -133,9 +209,6 @@
   classes changed to manage lock lifetime of the trees they open in a way
   consistent with reader-exclusive locks. (Robert Collins, #305006)
 
-Internals
-*********
-
 Testing
 *******
 
@@ -149,13 +222,29 @@
   conversion will commit too many copies a file.
   (Martin Pool, #415508)
 
+Improvements
+************
+
+* ``bzr push`` locally on windows will no longer give a locking error with
+  dirstate based formats. (Robert Collins)
+
+* ``bzr shelve`` and ``bzr unshelve`` now work on windows.
+  (Robert Collins, #305006)
+
 API Changes
 ***********
 
+* ``bzrlib.shelf_ui`` has had the ``from_args`` convenience methods of its
+  classes changed to manage lock lifetime of the trees they open in a way
+  consistent with reader-exclusive locks. (Robert Collins, #305006)
+
 * ``Tree.path_content_summary`` may return a size of None, when called on
   a tree with content filtering where the size of the canonical form
   cannot be cheaply determined. (Martin Pool)
 
+* When manually creating transport servers in test cases, a new helper
+  ``TestCase.start_server`` that registers a cleanup and starts the server
+  should be used. (Robert Collins)
 
 bzr 1.18
 ########
@@ -493,6 +582,17 @@
 ``countTestsCases``. (Robert Collins)
 
 
+bzr 1.17.1 (unreleased)
+#######################
+
+Bug Fixes
+*********
+
+* The optional ``_knit_load_data_pyx`` C extension was never being
+  imported. This caused significant slowdowns when reading data from
+  knit format repositories. (Andrew Bennetts, #405653)
+
+
 bzr 1.17 "So late it's brunch" 2009-07-20
 #########################################
 :Codename: so-late-its-brunch
@@ -991,6 +1091,9 @@
 Testing
 *******
 
+* ``make check`` no longer repeats the test run in ``LANG=C``.
+  (Martin Pool, #386180)
+
 * The number of cores is now correctly detected on OSX. (John Szakmeister)
 
 * The number of cores is also detected on Solaris and win32. (Vincent Ladeuil)
@@ -4971,7 +5074,7 @@
   checkouts. (Aaron Bentley, #182040)
 
 * Stop polluting /tmp when running selftest.
-  (Vincent Ladeuil, #123623)
+  (Vincent Ladeuil, #123363)
 
 * Switch from NFKC => NFC for normalization checks. NFC allows a few
   more characters which should be considered valid.
=== modified file 'bzr'
--- bzr	2009-08-11 03:02:56 +0000
+++ bzr	2009-08-28 05:11:10 +0000
@@ -23,7 +23,7 @@
 import warnings
 
 # update this on each release
-_script_version = (2, 0, 0)
+_script_version = (2, 1, 0)
 
 if __doc__ is None:
     print "bzr does not support python -OO."
=== modified file 'bzrlib/__init__.py'
--- bzrlib/__init__.py	2009-08-27 07:49:53 +0000
+++ bzrlib/__init__.py	2009-08-30 21:34:42 +0000
@@ -50,7 +50,7 @@
 # Python version 2.0 is (2, 0, 0, 'final', 0)." Additionally we use a
 # releaselevel of 'dev' for unreleased under-development code.
 
-version_info = (2, 0, 0, 'candidate', 1)
+version_info = (2, 1, 0, 'dev', 0)
 
 # API compatibility version: bzrlib is currently API compatible with 1.15.
 api_minimum_version = (1, 17, 0)
=== modified file 'bzrlib/_known_graph_py.py'
--- bzrlib/_known_graph_py.py	2009-08-17 20:41:26 +0000
+++ bzrlib/_known_graph_py.py	2009-08-25 18:45:40 +0000
@@ -97,6 +97,10 @@
         return [node for node in self._nodes.itervalues()
                 if not node.parent_keys]
 
+    def _find_tips(self):
+        return [node for node in self._nodes.itervalues()
+                if not node.child_keys]
+
     def _find_gdfo(self):
         nodes = self._nodes
         known_parent_gdfos = {}
@@ -218,6 +222,51 @@
         # We started from the parents, so we don't need to do anymore work
         return topo_order
 
+    def gc_sort(self):
+        """Return a reverse topological ordering which is 'stable'.
+
+        There are a few constraints:
+          1) Reverse topological (all children before all parents)
+          2) Grouped by prefix
+          3) 'stable' sorting, so that we get the same result, independent of
+             machine, or extra data.
+        To do this, we use the same basic algorithm as topo_sort, but when we
+        aren't sure what node to access next, we sort them lexicographically.
+        """
+        tips = self._find_tips()
+        # Split the tips based on prefix
+        prefix_tips = {}
+        for node in tips:
+            if node.key.__class__ is str or len(node.key) == 1:
+                prefix = ''
+            else:
+                prefix = node.key[0]
+            prefix_tips.setdefault(prefix, []).append(node)
+
+        num_seen_children = dict.fromkeys(self._nodes, 0)
+
+        result = []
+        for prefix in sorted(prefix_tips):
+            pending = sorted(prefix_tips[prefix], key=lambda n:n.key,
+                             reverse=True)
+            while pending:
+                node = pending.pop()
+                if node.parent_keys is None:
+                    # Ghost node, skip it
+                    continue
+                result.append(node.key)
+                for parent_key in sorted(node.parent_keys, reverse=True):
+                    parent_node = self._nodes[parent_key]
+                    seen_children = num_seen_children[parent_key] + 1
+                    if seen_children == len(parent_node.child_keys):
+                        # All children have been processed, enqueue this parent
+                        pending.append(parent_node)
+                        # This has been queued up, stop tracking it
+                        del num_seen_children[parent_key]
+                    else:
+                        num_seen_children[parent_key] = seen_children
+        return result
+
     def merge_sort(self, tip_key):
         """Compute the merge sorted graph output."""
         from bzrlib import tsort
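The pure-Python ``gc_sort`` above can be exercised without bzrlib. The following is a minimal standalone sketch of the same child-counting algorithm (the function name and the ``parents`` dict shape are illustrative, not part of bzrlib's API): children are emitted before parents, ghosts are skipped, and ties break lexicographically so the result is machine-independent.

```python
def gc_sort_sketch(parents):
    """Stable reverse-topological sort, in the style of KnownGraph.gc_sort.

    parents maps key -> tuple of parent keys, or None for a ghost node.
    """
    # Build child lists so we know when every child of a node is emitted.
    children = dict((k, []) for k in parents)
    for key, ps in parents.items():
        if ps is None:
            continue
        for p in ps:
            children[p].append(key)
    # Tips have no children; sort reversed so pop() yields the smallest key.
    pending = sorted((k for k in parents if not children[k]), reverse=True)
    seen = dict.fromkeys(parents, 0)
    result = []
    while pending:
        key = pending.pop()
        if parents[key] is None:
            continue  # ghost node, skip it
        result.append(key)
        for p in sorted(parents[key], reverse=True):
            seen[p] += 1
            if seen[p] == len(children[p]):
                # All children emitted, so the parent is ready to emit.
                pending.append(p)
    return result
```

On a diamond graph (D merges B and C, both children of A) this yields ['D', 'B', 'C', 'A']: reverse-topological, with B before C because the tie is broken lexicographically.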
=== modified file 'bzrlib/_known_graph_pyx.pyx'
--- bzrlib/_known_graph_pyx.pyx	2009-08-26 16:03:59 +0000
+++ bzrlib/_known_graph_pyx.pyx	2009-09-02 13:32:52 +0000
@@ -25,11 +25,18 @@
     ctypedef struct PyObject:
         pass
 
+    int PyString_CheckExact(object)
+
+    int PyObject_RichCompareBool(object, object, int)
+    int Py_LT
+
+    int PyTuple_CheckExact(object)
     object PyTuple_New(Py_ssize_t n)
     Py_ssize_t PyTuple_GET_SIZE(object t)
     PyObject * PyTuple_GET_ITEM(object t, Py_ssize_t o)
     void PyTuple_SET_ITEM(object t, Py_ssize_t o, object v)
 
+    int PyList_CheckExact(object)
     Py_ssize_t PyList_GET_SIZE(object l)
     PyObject * PyList_GET_ITEM(object l, Py_ssize_t o)
     int PyList_SetItem(object l, Py_ssize_t o, object l) except -1
@@ -108,14 +115,65 @@
     return <_KnownGraphNode>temp_node
 
 
-cdef _KnownGraphNode _get_parent(parents, Py_ssize_t pos):
+cdef _KnownGraphNode _get_tuple_node(tpl, Py_ssize_t pos):
     cdef PyObject *temp_node
-    cdef _KnownGraphNode node
 
-    temp_node = PyTuple_GET_ITEM(parents, pos)
+    temp_node = PyTuple_GET_ITEM(tpl, pos)
     return <_KnownGraphNode>temp_node
 
 
+def get_key(node):
+    cdef _KnownGraphNode real_node
+    real_node = node
+    return real_node.key
+
+
+cdef object _sort_list_nodes(object lst_or_tpl, int reverse):
+    """Sort a list of _KnownGraphNode objects.
+
+    If lst_or_tpl is a list, it is allowed to mutate in place. It may also
+    just return the input list if everything is already sorted.
+    """
+    cdef _KnownGraphNode node1, node2
+    cdef int do_swap, is_tuple
+    cdef Py_ssize_t length
+
+    is_tuple = PyTuple_CheckExact(lst_or_tpl)
+    if not (is_tuple or PyList_CheckExact(lst_or_tpl)):
+        raise TypeError('lst_or_tpl must be a list or tuple.')
+    length = len(lst_or_tpl)
+    if length == 0 or length == 1:
+        return lst_or_tpl
+    if length == 2:
+        if is_tuple:
+            node1 = _get_tuple_node(lst_or_tpl, 0)
+            node2 = _get_tuple_node(lst_or_tpl, 1)
+        else:
+            node1 = _get_list_node(lst_or_tpl, 0)
+            node2 = _get_list_node(lst_or_tpl, 1)
+        if reverse:
+            do_swap = PyObject_RichCompareBool(node1.key, node2.key, Py_LT)
+        else:
+            do_swap = PyObject_RichCompareBool(node2.key, node1.key, Py_LT)
+        if not do_swap:
+            return lst_or_tpl
+        if is_tuple:
+            return (node2, node1)
+        else:
+            # Swap 'in-place', since lists are mutable
+            Py_INCREF(node1)
+            PyList_SetItem(lst_or_tpl, 1, node1)
+            Py_INCREF(node2)
+            PyList_SetItem(lst_or_tpl, 0, node2)
+            return lst_or_tpl
+    # For all other sizes, we just use 'sorted()'
+    if is_tuple:
+        # Note that sorted() is just list(iterable).sort()
+        lst_or_tpl = list(lst_or_tpl)
+    lst_or_tpl.sort(key=get_key, reverse=reverse)
+    return lst_or_tpl
+
+
 cdef class _MergeSorter
 
 cdef class KnownGraph:
@@ -216,6 +274,19 @@
             PyList_Append(tails, node)
         return tails
 
+    def _find_tips(self):
+        cdef PyObject *temp_node
+        cdef _KnownGraphNode node
+        cdef Py_ssize_t pos
+
+        tips = []
+        pos = 0
+        while PyDict_Next(self._nodes, &pos, NULL, &temp_node):
+            node = <_KnownGraphNode>temp_node
+            if PyList_GET_SIZE(node.children) == 0:
+                PyList_Append(tips, node)
+        return tips
+
     def _find_gdfo(self):
         cdef _KnownGraphNode node
         cdef _KnownGraphNode child
@@ -322,7 +393,7 @@
                 continue
             if node.parents is not None and PyTuple_GET_SIZE(node.parents) > 0:
                 for pos from 0 <= pos < PyTuple_GET_SIZE(node.parents):
-                    parent_node = _get_parent(node.parents, pos)
+                    parent_node = _get_tuple_node(node.parents, pos)
                     last_item = last_item + 1
                     if last_item < PyList_GET_SIZE(pending):
                         Py_INCREF(parent_node) # SetItem steals a ref
@@ -397,6 +468,77 @@
         # We started from the parents, so we don't need to do anymore work
         return topo_order
 
+    def gc_sort(self):
+        """Return a reverse topological ordering which is 'stable'.
+
+        There are a few constraints:
+          1) Reverse topological (all children before all parents)
+          2) Grouped by prefix
+          3) 'stable' sorting, so that we get the same result, independent of
+             machine, or extra data.
+        To do this, we use the same basic algorithm as topo_sort, but when we
+        aren't sure what node to access next, we sort them lexicographically.
+        """
+        cdef PyObject *temp
+        cdef Py_ssize_t pos, last_item
+        cdef _KnownGraphNode node, node2, parent_node
+
+        tips = self._find_tips()
+        # Split the tips based on prefix
+        prefix_tips = {}
+        for pos from 0 <= pos < PyList_GET_SIZE(tips):
+            node = _get_list_node(tips, pos)
+            if PyString_CheckExact(node.key) or len(node.key) == 1:
+                prefix = ''
+            else:
+                prefix = node.key[0]
+            temp = PyDict_GetItem(prefix_tips, prefix)
+            if temp == NULL:
+                prefix_tips[prefix] = [node]
+            else:
+                tip_nodes = <object>temp
+                PyList_Append(tip_nodes, node)
+
+        result = []
+        for prefix in sorted(prefix_tips):
+            temp = PyDict_GetItem(prefix_tips, prefix)
+            assert temp != NULL
+            tip_nodes = <object>temp
+            pending = _sort_list_nodes(tip_nodes, 1)
+            last_item = PyList_GET_SIZE(pending) - 1
+            while last_item >= 0:
+                node = _get_list_node(pending, last_item)
+                last_item = last_item - 1
+                if node.parents is None:
+                    # Ghost
+                    continue
+                PyList_Append(result, node.key)
+                # Sorting the parent keys isn't strictly necessary for stable
+                # sorting of a given graph. But it does help minimize the
+                # differences between graphs
+                # For bzr.dev ancestry:
+                #   4.73ms no sort
+                #   7.73ms RichCompareBool sort
+                parents = _sort_list_nodes(node.parents, 1)
+                for pos from 0 <= pos < len(parents):
+                    if PyTuple_CheckExact(parents):
+                        parent_node = _get_tuple_node(parents, pos)
+                    else:
+                        parent_node = _get_list_node(parents, pos)
+                    # TODO: GraphCycle detection
+                    parent_node.seen = parent_node.seen + 1
+                    if (parent_node.seen
+                        == PyList_GET_SIZE(parent_node.children)):
+                        # All children have been processed, queue up this
+                        # parent
+                        last_item = last_item + 1
+                        if last_item < PyList_GET_SIZE(pending):
+                            Py_INCREF(parent_node) # SetItem steals a ref
+                            PyList_SetItem(pending, last_item, parent_node)
+                        else:
+                            PyList_Append(pending, parent_node)
+                        parent_node.seen = 0
+        return result
 
     def merge_sort(self, tip_key):
         """Compute the merge sorted graph output."""
@@ -522,7 +664,7 @@
             raise RuntimeError('ghost nodes should not be pushed'
                                ' onto the stack: %s' % (node,))
         if PyTuple_GET_SIZE(node.parents) > 0:
-            parent_node = _get_parent(node.parents, 0)
+            parent_node = _get_tuple_node(node.parents, 0)
             ms_node.left_parent = parent_node
             if parent_node.parents is None: # left-hand ghost
                 ms_node.left_pending_parent = None
@@ -532,7 +674,7 @@
         if PyTuple_GET_SIZE(node.parents) > 1:
             ms_node.pending_parents = []
             for pos from 1 <= pos < PyTuple_GET_SIZE(node.parents):
-                parent_node = _get_parent(node.parents, pos)
+                parent_node = _get_tuple_node(node.parents, pos)
                 if parent_node.parents is None: # ghost
                     continue
                 PyList_Append(ms_node.pending_parents, parent_node)
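The ``_sort_list_nodes`` helper in the Pyrex file avoids a full ``sorted()`` call for the overwhelmingly common small inputs (a node's parents are usually a tuple of one or two). The idea can be sketched in plain Python; ``sort_nodes`` is a hypothetical stand-in sorting comparable keys directly rather than ``_KnownGraphNode`` objects:

```python
def sort_nodes(seq, reverse=False):
    """Size-specialized sort, mirroring _sort_list_nodes' fast paths.

    Length 0 or 1 is returned untouched; length 2 costs exactly one
    comparison (swapping lists in place, rebuilding tuples); anything
    longer falls back to the general sort.
    """
    if len(seq) < 2:
        return seq
    if len(seq) == 2:
        a, b = seq
        # A single comparison decides whether the pair must be swapped.
        swap = (a < b) if reverse else (b < a)
        if not swap:
            return seq
        if isinstance(seq, tuple):
            return (b, a)
        seq[0], seq[1] = b, a  # lists are mutated in place
        return seq
    if isinstance(seq, tuple):
        seq = list(seq)  # sorted() also copies before sorting
    seq.sort(reverse=reverse)
    return seq
```

As in the C version, a tuple input of length 2 comes back as a (possibly new) tuple, while list inputs may be reordered in place and returned.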
=== modified file 'bzrlib/builtins.py'
--- bzrlib/builtins.py	2009-08-26 03:20:32 +0000
+++ bzrlib/builtins.py	2009-08-28 05:00:33 +0000
@@ -3382,6 +3382,8 @@
             Option('lsprof-timed',
                    help='Generate lsprof output for benchmarked'
                         ' sections of code.'),
+            Option('lsprof-tests',
+                   help='Generate lsprof output for each test.'),
             Option('cache-dir', type=str,
                    help='Cache intermediate benchmark output in this '
                         'directory.'),
@@ -3428,7 +3430,7 @@
             first=False, list_only=False,
             randomize=None, exclude=None, strict=False,
             load_list=None, debugflag=None, starting_with=None, subunit=False,
-            parallel=None):
+            parallel=None, lsprof_tests=False):
         from bzrlib.tests import selftest
         import bzrlib.benchmarks as benchmarks
         from bzrlib.benchmarks import tree_creator
@@ -3468,6 +3470,7 @@
             "transport": transport,
             "test_suite_factory": test_suite_factory,
             "lsprof_timed": lsprof_timed,
+            "lsprof_tests": lsprof_tests,
             "bench_history": benchfile,
             "matching_tests_first": first,
             "list_only": list_only,
542 | === modified file 'bzrlib/groupcompress.py' | |||
543 | --- bzrlib/groupcompress.py 2009-08-26 16:47:51 +0000 | |||
544 | +++ bzrlib/groupcompress.py 2009-09-03 15:25:36 +0000 | |||
545 | @@ -457,7 +457,6 @@ | |||
546 | 457 | # There are code paths that first extract as fulltext, and then | 457 | # There are code paths that first extract as fulltext, and then |
547 | 458 | # extract as storage_kind (smart fetch). So we don't break the | 458 | # extract as storage_kind (smart fetch). So we don't break the |
548 | 459 | # refcycle here, but instead in manager.get_record_stream() | 459 | # refcycle here, but instead in manager.get_record_stream() |
549 | 460 | # self._manager = None | ||
550 | 461 | if storage_kind == 'fulltext': | 460 | if storage_kind == 'fulltext': |
551 | 462 | return self._bytes | 461 | return self._bytes |
552 | 463 | else: | 462 | else: |
553 | @@ -469,6 +468,14 @@ | |||
554 | 469 | class _LazyGroupContentManager(object): | 468 | class _LazyGroupContentManager(object): |
555 | 470 | """This manages a group of _LazyGroupCompressFactory objects.""" | 469 | """This manages a group of _LazyGroupCompressFactory objects.""" |
556 | 471 | 470 | ||
557 | 471 | _max_cut_fraction = 0.75 # We allow a block to be trimmed to 75% of | ||
558 | 472 | # current size, and still be considered | ||
559 | 473 | # reusable | ||
560 | 474 | _full_block_size = 4*1024*1024 | ||
561 | 475 | _full_mixed_block_size = 2*1024*1024 | ||
562 | 476 | _full_enough_block_size = 3*1024*1024 # size at which we won't repack | ||
563 | 477 | _full_enough_mixed_block_size = 2*768*1024 # 1.5MB | ||
564 | 478 | |||
565 | 472 | def __init__(self, block): | 479 | def __init__(self, block): |
566 | 473 | self._block = block | 480 | self._block = block |
567 | 474 | # We need to preserve the ordering | 481 | # We need to preserve the ordering |
568 | @@ -546,22 +553,23 @@ | |||
569 | 546 | # time (self._block._content) is a little expensive. | 553 | # time (self._block._content) is a little expensive. |
570 | 547 | self._block._ensure_content(self._last_byte) | 554 | self._block._ensure_content(self._last_byte) |
571 | 548 | 555 | ||
573 | 549 | def _check_rebuild_block(self): | 556 | def _check_rebuild_action(self): |
574 | 550 | """Check to see if our block should be repacked.""" | 557 | """Check to see if our block should be repacked.""" |
575 | 551 | total_bytes_used = 0 | 558 | total_bytes_used = 0 |
576 | 552 | last_byte_used = 0 | 559 | last_byte_used = 0 |
577 | 553 | for factory in self._factories: | 560 | for factory in self._factories: |
578 | 554 | total_bytes_used += factory._end - factory._start | 561 | total_bytes_used += factory._end - factory._start |
582 | 555 | last_byte_used = max(last_byte_used, factory._end) | 562 | if last_byte_used < factory._end: |
583 | 556 | # If we are using most of the bytes from the block, we have nothing | 563 | last_byte_used = factory._end |
584 | 557 | # else to check (currently more that 1/2) | 564 | # If we are using more than half of the bytes from the block, we have |
585 | 565 | # nothing else to check | ||
586 | 558 | if total_bytes_used * 2 >= self._block._content_length: | 566 | if total_bytes_used * 2 >= self._block._content_length: |
590 | 559 | return | 567 | return None, last_byte_used, total_bytes_used |
591 | 560 | # Can we just strip off the trailing bytes? If we are going to be | 568 | # We are using less than 50% of the content. Is the content we are |
592 | 561 | # transmitting more than 50% of the front of the content, go ahead | 569 | # using at the beginning of the block? If so, we can just trim the |
593 | 570 | # tail, rather than rebuilding from scratch. | ||
594 | 562 | if total_bytes_used * 2 > last_byte_used: | 571 | if total_bytes_used * 2 > last_byte_used: |
597 | 563 | self._trim_block(last_byte_used) | 572 | return 'trim', last_byte_used, total_bytes_used |
596 | 564 | return | ||
598 | 565 | 573 | ||
599 | 566 | # We are using a small amount of the data, and it isn't just packed | 574 | # We are using a small amount of the data, and it isn't just packed |
600 | 567 | # nicely at the front, so rebuild the content. | 575 | # nicely at the front, so rebuild the content. |
601 | @@ -574,7 +582,77 @@ | |||
602 | 574 | # expanding many deltas into fulltexts, as well. | 582 | # expanding many deltas into fulltexts, as well. |
603 | 575 | # If we build a cheap enough 'strip', then we could try a strip, | 583 | # If we build a cheap enough 'strip', then we could try a strip, |
604 | 576 | # if that expands the content, we then rebuild. | 584 | # if that expands the content, we then rebuild. |
606 | 577 | self._rebuild_block() | 585 | return 'rebuild', last_byte_used, total_bytes_used |
607 | 586 | |||
608 | 587 | def check_is_well_utilized(self): | ||
609 | 588 | """Is the current block considered 'well utilized'? | ||
610 | 589 | |||
611 | 590 | This is a bit of a heuristic, but it basically asks if the current | ||
612 | 591 | block considers itself to be a fully developed group, rather than just | ||
613 | 592 | a loose collection of data. | ||
614 | 593 | """ | ||
615 | 594 | if len(self._factories) == 1: | ||
616 | 595 | # A block of length 1 is never considered 'well utilized' :) | ||
617 | 596 | return False | ||
618 | 597 | action, last_byte_used, total_bytes_used = self._check_rebuild_action() | ||
619 | 598 | block_size = self._block._content_length | ||
620 | 599 | if total_bytes_used < block_size * self._max_cut_fraction: | ||
621 | 600 | # This block wants to trim itself small enough that we want to | ||
622 | 601 | # consider it under-utilized. | ||
623 | 602 | return False | ||
624 | 603 | # TODO: This code is meant to be the twin of _insert_record_stream's | ||
625 | 604 | # 'start_new_block' logic. It would probably be better to factor | ||
626 | 605 | # out that logic into a shared location, so that it stays | ||
627 | 606 | # together better | ||
628 | 607 | # We currently assume a block is properly utilized whenever it is >75% | ||
629 | 608 | # of the size of a 'full' block. In normal operation, a block is | ||
630 | 609 | # considered full when it hits 4MB of same-file content. So any block | ||
631 | 610 | # >3MB is 'full enough'. | ||
632 | 611 | # The only time this isn't true is when a given block has large-object | ||
633 | 612 | # content. (a single file >4MB, etc.) | ||
634 | 613 | # Under these circumstances, we allow a block to grow to | ||
635 | 614 | # 2 x largest_content. Which means that if a given block had a large | ||
636 | 615 | # object, it may actually be under-utilized. However, given that this | ||
637 | 616 | # is 'pack-on-the-fly' it is probably reasonable to not repack large | ||
638 | 617 | # content blobs on-the-fly. | ||
639 | 618 | if block_size >= self._full_enough_block_size: | ||
640 | 619 | return True | ||
641 | 620 | # If a block is <3MB, it still may be considered 'full' if it contains | ||
642 | 621 | # mixed content. The current rule is 2MB of mixed content is considered | ||
643 | 622 | # full. So check to see if this block contains mixed content, and | ||
644 | 623 | # set the threshold appropriately. | ||
645 | 624 | common_prefix = None | ||
646 | 625 | for factory in self._factories: | ||
647 | 626 | prefix = factory.key[:-1] | ||
648 | 627 | if common_prefix is None: | ||
649 | 628 | common_prefix = prefix | ||
650 | 629 | elif prefix != common_prefix: | ||
651 | 630 | # Mixed content, check the size appropriately | ||
652 | 631 | if block_size >= self._full_enough_mixed_block_size: | ||
653 | 632 | return True | ||
654 | 633 | break | ||
655 | 634 | # The content failed both the mixed check and the single-content check | ||
656 | 635 | # so obviously it is not fully utilized | ||
657 | 636 | # TODO: there is one other constraint that isn't being checked | ||
658 | 637 | # namely, that the entries in the block are in the appropriate | ||
659 | 638 | # order. For example, you could insert the entries in exactly | ||
660 | 639 | # reverse groupcompress order, and we would think that is ok. | ||
661 | 640 | # (all the right objects are in one group, and it is fully | ||
662 | 641 | # utilized, etc.) For now, we assume that case is rare, | ||
663 | 642 | # especially since we should always fetch in 'groupcompress' | ||
664 | 643 | # order. | ||
665 | 644 | return False | ||
666 | 645 | |||
667 | 646 | def _check_rebuild_block(self): | ||
668 | 647 | action, last_byte_used, total_bytes_used = self._check_rebuild_action() | ||
669 | 648 | if action is None: | ||
670 | 649 | return | ||
671 | 650 | if action == 'trim': | ||
672 | 651 | self._trim_block(last_byte_used) | ||
673 | 652 | elif action == 'rebuild': | ||
674 | 653 | self._rebuild_block() | ||
675 | 654 | else: | ||
676 | 655 | raise ValueError('unknown rebuild action: %r' % (action,)) | ||
677 | 578 | 656 | ||
678 | 579 | def _wire_bytes(self): | 657 | def _wire_bytes(self): |
679 | 580 | """Return a byte stream suitable for transmitting over the wire.""" | 658 | """Return a byte stream suitable for transmitting over the wire.""" |
680 | @@ -1570,6 +1648,7 @@ | |||
681 | 1570 | block_length = None | 1648 | block_length = None |
682 | 1571 | # XXX: TODO: remove this, it is just for safety checking for now | 1649 | # XXX: TODO: remove this, it is just for safety checking for now |
683 | 1572 | inserted_keys = set() | 1650 | inserted_keys = set() |
684 | 1651 | reuse_this_block = reuse_blocks | ||
685 | 1573 | for record in stream: | 1652 | for record in stream: |
686 | 1574 | # Raise an error when a record is missing. | 1653 | # Raise an error when a record is missing. |
687 | 1575 | if record.storage_kind == 'absent': | 1654 | if record.storage_kind == 'absent': |
688 | @@ -1583,10 +1662,20 @@ | |||
689 | 1583 | if reuse_blocks: | 1662 | if reuse_blocks: |
690 | 1584 | # If the reuse_blocks flag is set, check to see if we can just | 1663 | # If the reuse_blocks flag is set, check to see if we can just |
691 | 1585 | # copy a groupcompress block as-is. | 1664 | # copy a groupcompress block as-is. |
692 | 1665 | # We only check on the first record (groupcompress-block) not | ||
693 | 1666 | # on all of the (groupcompress-block-ref) entries. | ||
694 | 1667 | # The reuse_this_block flag is then kept for as long as | ||
695 | 1668 | if record.storage_kind == 'groupcompress-block': | ||
696 | 1669 | # Check to see if we really want to re-use this block | ||
697 | 1670 | insert_manager = record._manager | ||
698 | 1671 | reuse_this_block = insert_manager.check_is_well_utilized() | ||
699 | 1672 | else: | ||
700 | 1673 | reuse_this_block = False | ||
701 | 1674 | if reuse_this_block: | ||
702 | 1675 | # We still want to reuse this block | ||
703 | 1586 | if record.storage_kind == 'groupcompress-block': | 1676 | if record.storage_kind == 'groupcompress-block': |
704 | 1587 | # Insert the raw block into the target repo | 1677 | # Insert the raw block into the target repo |
705 | 1588 | insert_manager = record._manager | 1678 | insert_manager = record._manager |
706 | 1589 | insert_manager._check_rebuild_block() | ||
707 | 1590 | bytes = record._manager._block.to_bytes() | 1679 | bytes = record._manager._block.to_bytes() |
708 | 1591 | _, start, length = self._access.add_raw_records( | 1680 | _, start, length = self._access.add_raw_records( |
709 | 1592 | [(None, len(bytes))], bytes)[0] | 1681 | [(None, len(bytes))], bytes)[0] |
710 | @@ -1597,6 +1686,11 @@ | |||
711 | 1597 | 'groupcompress-block-ref'): | 1686 | 'groupcompress-block-ref'): |
712 | 1598 | if insert_manager is None: | 1687 | if insert_manager is None: |
713 | 1599 | raise AssertionError('No insert_manager set') | 1688 | raise AssertionError('No insert_manager set') |
714 | 1689 | if insert_manager is not record._manager: | ||
715 | 1690 | raise AssertionError('insert_manager does not match' | ||
716 | 1691 | ' the current record, we cannot be positive' | ||
717 | 1692 | ' that the appropriate content was inserted.' | ||
718 | 1693 | ) | ||
719 | 1600 | value = "%d %d %d %d" % (block_start, block_length, | 1694 | value = "%d %d %d %d" % (block_start, block_length, |
720 | 1601 | record._start, record._end) | 1695 | record._start, record._end) |
721 | 1602 | nodes = [(record.key, value, (record.parents,))] | 1696 | nodes = [(record.key, value, (record.parents,))] |
722 | 1603 | 1697 | ||
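Reviewer note: the `check_is_well_utilized` hunk above encodes the pack-on-the-fly decision in three steps: a single-entry block is never reused, a block that would trim below 75% of its bytes is rebuilt, and otherwise the block is kept when it is at least 3MB (single prefix) or 1.5MB (mixed prefixes). A standalone sketch of that logic, with hypothetical names but the thresholds taken from the diff:

```python
# Sketch of the reuse heuristic added to _LazyGroupContentManager.
# Names are hypothetical; the thresholds are the ones in the diff.
FULL_ENOUGH_BLOCK_SIZE = 3 * 1024 * 1024        # single-prefix blocks
FULL_ENOUGH_MIXED_BLOCK_SIZE = 2 * 768 * 1024   # mixed-prefix blocks (1.5MB)
MAX_CUT_FRACTION = 0.75                         # reusable if >=75% of bytes used


def is_well_utilized(entries, block_size):
    """Decide whether a transmitted block should be reused as-is.

    :param entries: list of (prefix, start, end) tuples describing the
        byte ranges of the block that are actually referenced.
    :param block_size: total content length of the block in bytes.
    """
    if len(entries) == 1:
        # A block with a single entry is never considered well utilized.
        return False
    total_bytes_used = sum(end - start for _, start, end in entries)
    if total_bytes_used < block_size * MAX_CUT_FRACTION:
        # The block would want to trim itself smaller: under-utilized.
        return False
    if block_size >= FULL_ENOUGH_BLOCK_SIZE:
        return True
    # Smaller blocks still count as full when they mix content from more
    # than one prefix and pass the lower mixed-content threshold.
    prefixes = set(prefix for prefix, _, _ in entries)
    if len(prefixes) > 1 and block_size >= FULL_ENOUGH_MIXED_BLOCK_SIZE:
        return True
    return False
```

As the TODO in the diff notes, this is meant to mirror `_insert_record_stream`'s `start_new_block` logic, so the real code keeps both in `groupcompress.py`.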
723 | === modified file 'bzrlib/lsprof.py' | |||
724 | --- bzrlib/lsprof.py 2009-03-08 06:18:06 +0000 | |||
725 | +++ bzrlib/lsprof.py 2009-08-24 21:05:09 +0000 | |||
726 | @@ -13,45 +13,74 @@ | |||
727 | 13 | 13 | ||
728 | 14 | __all__ = ['profile', 'Stats'] | 14 | __all__ = ['profile', 'Stats'] |
729 | 15 | 15 | ||
730 | 16 | _g_threadmap = {} | ||
731 | 17 | |||
732 | 18 | |||
733 | 19 | def _thread_profile(f, *args, **kwds): | ||
734 | 20 | # we lose the first profile point for a new thread in order to trampoline | ||
735 | 21 | # a new Profile object into place | ||
736 | 22 | global _g_threadmap | ||
737 | 23 | thr = thread.get_ident() | ||
738 | 24 | _g_threadmap[thr] = p = Profiler() | ||
739 | 25 | # this overrides our sys.setprofile hook: | ||
740 | 26 | p.enable(subcalls=True, builtins=True) | ||
741 | 27 | |||
742 | 28 | |||
743 | 29 | def profile(f, *args, **kwds): | 16 | def profile(f, *args, **kwds): |
744 | 30 | """Run a function profile. | 17 | """Run a function profile. |
745 | 31 | 18 | ||
746 | 32 | Exceptions are not caught: If you need stats even when exceptions are to be | 19 | Exceptions are not caught: If you need stats even when exceptions are to be |
749 | 33 | raised, passing in a closure that will catch the exceptions and transform | 20 | raised, pass in a closure that will catch the exceptions and transform them |
750 | 34 | them appropriately for your driver function. | 21 | appropriately for your driver function. |
751 | 35 | 22 | ||
752 | 36 | :return: The function's return value and a stats object. | 23 | :return: The function's return value and a stats object. |
753 | 37 | """ | 24 | """ |
758 | 38 | global _g_threadmap | 25 | profiler = BzrProfiler() |
759 | 39 | p = Profiler() | 26 | profiler.start() |
756 | 40 | p.enable(subcalls=True) | ||
757 | 41 | threading.setprofile(_thread_profile) | ||
760 | 42 | try: | 27 | try: |
761 | 43 | ret = f(*args, **kwds) | 28 | ret = f(*args, **kwds) |
762 | 44 | finally: | 29 | finally: |
765 | 45 | p.disable() | 30 | stats = profiler.stop() |
766 | 46 | for pp in _g_threadmap.values(): | 31 | return ret, stats |
767 | 32 | |||
768 | 33 | |||
769 | 34 | class BzrProfiler(object): | ||
770 | 35 | """Bzr utility wrapper around Profiler. | ||
771 | 36 | |||
772 | 37 | For most uses the module level 'profile()' function will be suitable. | ||
773 | 38 | However profiling when a simple wrapped function isn't available may | ||
774 | 39 | be easier to accomplish using this class. | ||
775 | 40 | |||
776 | 41 | To use it, create a BzrProfiler and call start() on it. Some arbitrary | ||
777 | 42 | time later call stop() to stop profiling and retrieve the statistics | ||
778 | 43 | from the code executed in the interim. | ||
779 | 44 | """ | ||
780 | 45 | |||
781 | 46 | def start(self): | ||
782 | 47 | """Start profiling. | ||
783 | 48 | |||
784 | 49 | This hooks into threading and will record all calls made until | ||
785 | 50 | stop() is called. | ||
786 | 51 | """ | ||
787 | 52 | self._g_threadmap = {} | ||
788 | 53 | self.p = Profiler() | ||
789 | 54 | self.p.enable(subcalls=True) | ||
790 | 55 | threading.setprofile(self._thread_profile) | ||
791 | 56 | |||
792 | 57 | def stop(self): | ||
793 | 58 | """Stop profiling. | ||
794 | 59 | |||
795 | 60 | This unhooks from threading and cleans up the profiler, returning | ||
796 | 61 | the gathered Stats object. | ||
797 | 62 | |||
798 | 63 | :return: A bzrlib.lsprof.Stats object. | ||
799 | 64 | """ | ||
800 | 65 | self.p.disable() | ||
801 | 66 | for pp in self._g_threadmap.values(): | ||
802 | 47 | pp.disable() | 67 | pp.disable() |
803 | 48 | threading.setprofile(None) | 68 | threading.setprofile(None) |
804 | 69 | p = self.p | ||
805 | 70 | self.p = None | ||
806 | 71 | threads = {} | ||
807 | 72 | for tid, pp in self._g_threadmap.items(): | ||
808 | 73 | threads[tid] = Stats(pp.getstats(), {}) | ||
809 | 74 | self._g_threadmap = None | ||
810 | 75 | return Stats(p.getstats(), threads) | ||
811 | 49 | 76 | ||
817 | 50 | threads = {} | 77 | def _thread_profile(self, f, *args, **kwds): |
818 | 51 | for tid, pp in _g_threadmap.items(): | 78 | # we lose the first profile point for a new thread in order to |
819 | 52 | threads[tid] = Stats(pp.getstats(), {}) | 79 | # trampoline a new Profile object into place |
820 | 53 | _g_threadmap = {} | 80 | thr = thread.get_ident() |
821 | 54 | return ret, Stats(p.getstats(), threads) | 81 | self._g_threadmap[thr] = p = Profiler() |
822 | 82 | # this overrides our sys.setprofile hook: | ||
823 | 83 | p.enable(subcalls=True, builtins=True) | ||
824 | 55 | 84 | ||
825 | 56 | 85 | ||
826 | 57 | class Stats(object): | 86 | class Stats(object): |
827 | 58 | 87 | ||
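Reviewer note: the lsprof rewrite above replaces the module-global `_g_threadmap` with a `BzrProfiler` object that owns its state between explicit `start()` and `stop()` calls. The same shape of API, sketched here against the stdlib `cProfile` instead of bzrlib's `Profiler` so it runs without bzrlib (names are hypothetical):

```python
# start()/stop() profiler wrapper in the style of the new BzrProfiler,
# using the stdlib cProfile/pstats instead of bzrlib's lsprof extension.
import cProfile
import io
import pstats


class ExplicitProfiler(object):
    """Object-owned profiling state with explicit start()/stop()."""

    def start(self):
        # All state lives on the instance, not on module globals.
        self._profile = cProfile.Profile()
        self._profile.enable()

    def stop(self):
        # Disable, hand back the stats, and drop our reference.
        self._profile.disable()
        stats = pstats.Stats(self._profile, stream=io.StringIO())
        self._profile = None
        return stats


def profile(f, *args, **kwds):
    """Module-level convenience, mirroring the rewritten lsprof.profile()."""
    profiler = ExplicitProfiler()
    profiler.start()
    try:
        ret = f(*args, **kwds)
    finally:
        stats = profiler.stop()
    return ret, stats
```

The design point of the diff is visible here: because no globals are touched, two profilers can exist without clobbering each other, and `profile()` becomes a thin wrapper over the class.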
828 | === modified file 'bzrlib/repofmt/groupcompress_repo.py' | |||
829 | --- bzrlib/repofmt/groupcompress_repo.py 2009-08-24 19:34:13 +0000 | |||
830 | +++ bzrlib/repofmt/groupcompress_repo.py 2009-09-01 06:10:24 +0000 | |||
831 | @@ -932,7 +932,7 @@ | |||
832 | 932 | super(GroupCHKStreamSource, self).__init__(from_repository, to_format) | 932 | super(GroupCHKStreamSource, self).__init__(from_repository, to_format) |
833 | 933 | self._revision_keys = None | 933 | self._revision_keys = None |
834 | 934 | self._text_keys = None | 934 | self._text_keys = None |
836 | 935 | # self._text_fetch_order = 'unordered' | 935 | self._text_fetch_order = 'groupcompress' |
837 | 936 | self._chk_id_roots = None | 936 | self._chk_id_roots = None |
838 | 937 | self._chk_p_id_roots = None | 937 | self._chk_p_id_roots = None |
839 | 938 | 938 | ||
840 | @@ -949,7 +949,7 @@ | |||
841 | 949 | p_id_roots_set = set() | 949 | p_id_roots_set = set() |
842 | 950 | source_vf = self.from_repository.inventories | 950 | source_vf = self.from_repository.inventories |
843 | 951 | stream = source_vf.get_record_stream(inventory_keys, | 951 | stream = source_vf.get_record_stream(inventory_keys, |
845 | 952 | 'unordered', True) | 952 | 'groupcompress', True) |
846 | 953 | for record in stream: | 953 | for record in stream: |
847 | 954 | if record.storage_kind == 'absent': | 954 | if record.storage_kind == 'absent': |
848 | 955 | if allow_absent: | 955 | if allow_absent: |
849 | 956 | 956 | ||
850 | === modified file 'bzrlib/repository.py' | |||
851 | --- bzrlib/repository.py 2009-08-30 22:02:45 +0000 | |||
852 | +++ bzrlib/repository.py 2009-09-03 15:26:27 +0000 | |||
853 | @@ -3844,6 +3844,9 @@ | |||
854 | 3844 | possible_trees.append((basis_id, cache[basis_id])) | 3844 | possible_trees.append((basis_id, cache[basis_id])) |
855 | 3845 | basis_id, delta = self._get_delta_for_revision(tree, parent_ids, | 3845 | basis_id, delta = self._get_delta_for_revision(tree, parent_ids, |
856 | 3846 | possible_trees) | 3846 | possible_trees) |
857 | 3847 | revision = self.source.get_revision(current_revision_id) | ||
858 | 3848 | pending_deltas.append((basis_id, delta, | ||
859 | 3849 | current_revision_id, revision.parent_ids)) | ||
860 | 3847 | if self._converting_to_rich_root: | 3850 | if self._converting_to_rich_root: |
861 | 3848 | self._revision_id_to_root_id[current_revision_id] = \ | 3851 | self._revision_id_to_root_id[current_revision_id] = \ |
862 | 3849 | tree.get_root_id() | 3852 | tree.get_root_id() |
863 | @@ -3878,9 +3881,6 @@ | |||
864 | 3878 | if entry.revision == file_revision: | 3881 | if entry.revision == file_revision: |
865 | 3879 | texts_possibly_new_in_tree.remove(file_key) | 3882 | texts_possibly_new_in_tree.remove(file_key) |
866 | 3880 | text_keys.update(texts_possibly_new_in_tree) | 3883 | text_keys.update(texts_possibly_new_in_tree) |
867 | 3881 | revision = self.source.get_revision(current_revision_id) | ||
868 | 3882 | pending_deltas.append((basis_id, delta, | ||
869 | 3883 | current_revision_id, revision.parent_ids)) | ||
870 | 3884 | pending_revisions.append(revision) | 3884 | pending_revisions.append(revision) |
871 | 3885 | cache[current_revision_id] = tree | 3885 | cache[current_revision_id] = tree |
872 | 3886 | basis_id = current_revision_id | 3886 | basis_id = current_revision_id |
873 | 3887 | 3887 | ||
874 | === modified file 'bzrlib/smart/repository.py' | |||
875 | --- bzrlib/smart/repository.py 2009-08-14 00:55:42 +0000 | |||
876 | +++ bzrlib/smart/repository.py 2009-09-02 22:29:55 +0000 | |||
877 | @@ -519,36 +519,92 @@ | |||
878 | 519 | yield pack_writer.end() | 519 | yield pack_writer.end() |
879 | 520 | 520 | ||
880 | 521 | 521 | ||
881 | 522 | class _ByteStreamDecoder(object): | ||
882 | 523 | """Helper for _byte_stream_to_stream. | ||
883 | 524 | |||
884 | 525 | Broadly this class has to unwrap two layers of iterators: | ||
885 | 526 | (type, substream) | ||
886 | 527 | (substream details) | ||
887 | 528 | |||
888 | 529 | This is complicated by wishing to return type, iterator_for_type, but | ||
889 | 530 | getting the data for iterator_for_type when we find out type: we can't | ||
890 | 531 | simply pass a generator down to the NetworkRecordStream parser, instead | ||
891 | 532 | we have a little local state to seed each NetworkRecordStream instance, | ||
892 | 533 | and gather the type that we'll be yielding. | ||
893 | 534 | |||
894 | 535 | :ivar byte_stream: The byte stream being decoded. | ||
895 | 536 | :ivar stream_decoder: A pack parser used to decode the bytestream | ||
896 | 537 | :ivar current_type: The current type, used to join adjacent records of the | ||
897 | 538 | same type into a single stream. | ||
898 | 539 | :ivar first_bytes: The first bytes to give the next NetworkRecordStream. | ||
899 | 540 | """ | ||
900 | 541 | |||
901 | 542 | def __init__(self, byte_stream): | ||
902 | 543 | """Create a _ByteStreamDecoder.""" | ||
903 | 544 | self.stream_decoder = pack.ContainerPushParser() | ||
904 | 545 | self.current_type = None | ||
905 | 546 | self.first_bytes = None | ||
906 | 547 | self.byte_stream = byte_stream | ||
907 | 548 | |||
908 | 549 | def iter_stream_decoder(self): | ||
909 | 550 | """Iterate the contents of the pack from stream_decoder.""" | ||
910 | 551 | # dequeue pending items | ||
911 | 552 | for record in self.stream_decoder.read_pending_records(): | ||
912 | 553 | yield record | ||
913 | 554 | # Pull bytes of the wire, decode them to records, yield those records. | ||
914 | 555 | for bytes in self.byte_stream: | ||
915 | 556 | self.stream_decoder.accept_bytes(bytes) | ||
916 | 557 | for record in self.stream_decoder.read_pending_records(): | ||
917 | 558 | yield record | ||
918 | 559 | |||
919 | 560 | def iter_substream_bytes(self): | ||
920 | 561 | if self.first_bytes is not None: | ||
921 | 562 | yield self.first_bytes | ||
922 | 563 | # If we run out of pack records, single the outer layer to stop. | ||
923 | 564 | self.first_bytes = None | ||
924 | 565 | for record in self.iter_pack_records: | ||
925 | 566 | record_names, record_bytes = record | ||
926 | 567 | record_name, = record_names | ||
927 | 568 | substream_type = record_name[0] | ||
928 | 569 | if substream_type != self.current_type: | ||
929 | 570 | # end of a substream, seed the next substream. | ||
930 | 571 | self.current_type = substream_type | ||
931 | 572 | self.first_bytes = record_bytes | ||
932 | 573 | return | ||
933 | 574 | yield record_bytes | ||
934 | 575 | |||
935 | 576 | def record_stream(self): | ||
936 | 577 | """Yield substream_type, substream from the byte stream.""" | ||
937 | 578 | self.seed_state() | ||
938 | 579 | # Make and consume sub generators, one per substream type: | ||
939 | 580 | while self.first_bytes is not None: | ||
940 | 581 | substream = NetworkRecordStream(self.iter_substream_bytes()) | ||
941 | 582 | # after substream is fully consumed, self.current_type is set to | ||
942 | 583 | # the next type, and self.first_bytes is set to the matching bytes. | ||
943 | 584 | yield self.current_type, substream.read() | ||
944 | 585 | |||
945 | 586 | def seed_state(self): | ||
946 | 587 | """Prepare the _ByteStreamDecoder to decode from the pack stream.""" | ||
947 | 588 | # Set a single generator we can use to get data from the pack stream. | ||
948 | 589 | self.iter_pack_records = self.iter_stream_decoder() | ||
949 | 590 | # Seed the very first subiterator with content; after this each one | ||
950 | 591 | # seeds the next. | ||
951 | 592 | list(self.iter_substream_bytes()) | ||
952 | 593 | |||
953 | 594 | |||
954 | 522 | def _byte_stream_to_stream(byte_stream): | 595 | def _byte_stream_to_stream(byte_stream): |
955 | 523 | """Convert a byte stream into a format and a stream. | 596 | """Convert a byte stream into a format and a stream. |
956 | 524 | 597 | ||
957 | 525 | :param byte_stream: A bytes iterator, as output by _stream_to_byte_stream. | 598 | :param byte_stream: A bytes iterator, as output by _stream_to_byte_stream. |
958 | 526 | :return: (RepositoryFormat, stream_generator) | 599 | :return: (RepositoryFormat, stream_generator) |
959 | 527 | """ | 600 | """ |
978 | 528 | stream_decoder = pack.ContainerPushParser() | 601 | decoder = _ByteStreamDecoder(byte_stream) |
961 | 529 | def record_stream(): | ||
962 | 530 | """Closure to return the substreams.""" | ||
963 | 531 | # May have fully parsed records already. | ||
964 | 532 | for record in stream_decoder.read_pending_records(): | ||
965 | 533 | record_names, record_bytes = record | ||
966 | 534 | record_name, = record_names | ||
967 | 535 | substream_type = record_name[0] | ||
968 | 536 | substream = NetworkRecordStream([record_bytes]) | ||
969 | 537 | yield substream_type, substream.read() | ||
970 | 538 | for bytes in byte_stream: | ||
971 | 539 | stream_decoder.accept_bytes(bytes) | ||
972 | 540 | for record in stream_decoder.read_pending_records(): | ||
973 | 541 | record_names, record_bytes = record | ||
974 | 542 | record_name, = record_names | ||
975 | 543 | substream_type = record_name[0] | ||
976 | 544 | substream = NetworkRecordStream([record_bytes]) | ||
977 | 545 | yield substream_type, substream.read() | ||
979 | 546 | for bytes in byte_stream: | 602 | for bytes in byte_stream: |
982 | 547 | stream_decoder.accept_bytes(bytes) | 603 | decoder.stream_decoder.accept_bytes(bytes) |
983 | 548 | for record in stream_decoder.read_pending_records(max=1): | 604 | for record in decoder.stream_decoder.read_pending_records(max=1): |
984 | 549 | record_names, src_format_name = record | 605 | record_names, src_format_name = record |
985 | 550 | src_format = network_format_registry.get(src_format_name) | 606 | src_format = network_format_registry.get(src_format_name) |
987 | 551 | return src_format, record_stream() | 607 | return src_format, decoder.record_stream() |
988 | 552 | 608 | ||
989 | 553 | 609 | ||
990 | 554 | class SmartServerRepositoryUnlock(SmartServerRepositoryRequest): | 610 | class SmartServerRepositoryUnlock(SmartServerRepositoryRequest): |
991 | 555 | 611 | ||
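Reviewer note: the `_ByteStreamDecoder` above turns a flat record stream into per-type substreams by "seeding" each sub-generator with the first record of the next type, exactly as its docstring describes. The trick in miniature, with hypothetical names and plain `(type, payload)` tuples standing in for pack records:

```python
# Miniature version of the substream-grouping pattern used by
# _ByteStreamDecoder: each sub-generator stashes the record that starts
# the NEXT substream, so the outer loop knows when to emit a new pair.
class SubstreamGrouper(object):

    def __init__(self, records):
        self._records = iter(records)   # yields (type, payload) pairs
        self.current_type = None
        self.first_payload = None

    def _iter_substream(self):
        if self.first_payload is not None:
            yield self.first_payload
        # If the records run out, first_payload stays None, which tells
        # group() to stop.
        self.first_payload = None
        for substream_type, payload in self._records:
            if substream_type != self.current_type:
                # End of this substream: remember the record that opens
                # the next one, then return to the caller.
                self.current_type = substream_type
                self.first_payload = payload
                return
            yield payload

    def group(self):
        """Yield (type, payload_list) for each run of equal types."""
        # Seed: consumes nothing but captures the first type/payload.
        list(self._iter_substream())
        while self.first_payload is not None:
            substream_type = self.current_type
            payloads = list(self._iter_substream())
            yield substream_type, payloads
```

Note the ordering in `group()`: the type must be read before the substream is consumed, because consuming it advances `current_type` to the next run.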
992 | === modified file 'bzrlib/tests/__init__.py' | |||
993 | --- bzrlib/tests/__init__.py 2009-08-24 20:30:18 +0000 | |||
994 | +++ bzrlib/tests/__init__.py 2009-08-28 21:05:31 +0000 | |||
995 | @@ -28,6 +28,7 @@ | |||
996 | 28 | 28 | ||
997 | 29 | import atexit | 29 | import atexit |
998 | 30 | import codecs | 30 | import codecs |
999 | 31 | from copy import copy | ||
1000 | 31 | from cStringIO import StringIO | 32 | from cStringIO import StringIO |
1001 | 32 | import difflib | 33 | import difflib |
1002 | 33 | import doctest | 34 | import doctest |
1003 | @@ -174,17 +175,47 @@ | |||
1004 | 174 | self._overall_start_time = time.time() | 175 | self._overall_start_time = time.time() |
1005 | 175 | self._strict = strict | 176 | self._strict = strict |
1006 | 176 | 177 | ||
1010 | 177 | def done(self): | 178 | def stopTestRun(self): |
1011 | 178 | # nb: called stopTestRun in the version of this that Python merged | 179 | run = self.testsRun |
1012 | 179 | # upstream, according to lifeless 20090803 | 180 | actionTaken = "Ran" |
1013 | 181 | stopTime = time.time() | ||
1014 | 182 | timeTaken = stopTime - self.startTime | ||
1015 | 183 | self.printErrors() | ||
1016 | 184 | self.stream.writeln(self.separator2) | ||
1017 | 185 | self.stream.writeln("%s %d test%s in %.3fs" % (actionTaken, | ||
1018 | 186 | run, run != 1 and "s" or "", timeTaken)) | ||
1019 | 187 | self.stream.writeln() | ||
1020 | 188 | if not self.wasSuccessful(): | ||
1021 | 189 | self.stream.write("FAILED (") | ||
1022 | 190 | failed, errored = map(len, (self.failures, self.errors)) | ||
1023 | 191 | if failed: | ||
1024 | 192 | self.stream.write("failures=%d" % failed) | ||
1025 | 193 | if errored: | ||
1026 | 194 | if failed: self.stream.write(", ") | ||
1027 | 195 | self.stream.write("errors=%d" % errored) | ||
1028 | 196 | if self.known_failure_count: | ||
1029 | 197 | if failed or errored: self.stream.write(", ") | ||
1030 | 198 | self.stream.write("known_failure_count=%d" % | ||
1031 | 199 | self.known_failure_count) | ||
1032 | 200 | self.stream.writeln(")") | ||
1033 | 201 | else: | ||
1034 | 202 | if self.known_failure_count: | ||
1035 | 203 | self.stream.writeln("OK (known_failures=%d)" % | ||
1036 | 204 | self.known_failure_count) | ||
1037 | 205 | else: | ||
1038 | 206 | self.stream.writeln("OK") | ||
1039 | 207 | if self.skip_count > 0: | ||
1040 | 208 | skipped = self.skip_count | ||
1041 | 209 | self.stream.writeln('%d test%s skipped' % | ||
1042 | 210 | (skipped, skipped != 1 and "s" or "")) | ||
1043 | 211 | if self.unsupported: | ||
1044 | 212 | for feature, count in sorted(self.unsupported.items()): | ||
1045 | 213 | self.stream.writeln("Missing feature '%s' skipped %d tests." % | ||
1046 | 214 | (feature, count)) | ||
1047 | 180 | if self._strict: | 215 | if self._strict: |
1048 | 181 | ok = self.wasStrictlySuccessful() | 216 | ok = self.wasStrictlySuccessful() |
1049 | 182 | else: | 217 | else: |
1050 | 183 | ok = self.wasSuccessful() | 218 | ok = self.wasSuccessful() |
1051 | 184 | if ok: | ||
1052 | 185 | self.stream.write('tests passed\n') | ||
1053 | 186 | else: | ||
1054 | 187 | self.stream.write('tests failed\n') | ||
1055 | 188 | if TestCase._first_thread_leaker_id: | 219 | if TestCase._first_thread_leaker_id: |
1056 | 189 | self.stream.write( | 220 | self.stream.write( |
1057 | 190 | '%s is leaking threads among %d leaking tests.\n' % ( | 221 | '%s is leaking threads among %d leaking tests.\n' % ( |
1058 | @@ -382,12 +413,12 @@ | |||
1059 | 382 | else: | 413 | else: |
1060 | 383 | raise errors.BzrError("Unknown whence %r" % whence) | 414 | raise errors.BzrError("Unknown whence %r" % whence) |
1061 | 384 | 415 | ||
1062 | 385 | def finished(self): | ||
1063 | 386 | pass | ||
1064 | 387 | |||
1065 | 388 | def report_cleaning_up(self): | 416 | def report_cleaning_up(self): |
1066 | 389 | pass | 417 | pass |
1067 | 390 | 418 | ||
1068 | 419 | def startTestRun(self): | ||
1069 | 420 | self.startTime = time.time() | ||
1070 | 421 | |||
1071 | 391 | def report_success(self, test): | 422 | def report_success(self, test): |
1072 | 392 | pass | 423 | pass |
1073 | 393 | 424 | ||
1074 | @@ -420,15 +451,14 @@ | |||
1075 | 420 | self.pb.update_latency = 0 | 451 | self.pb.update_latency = 0 |
1076 | 421 | self.pb.show_transport_activity = False | 452 | self.pb.show_transport_activity = False |
1077 | 422 | 453 | ||
1079 | 423 | def done(self): | 454 | def stopTestRun(self): |
1080 | 424 | # called when the tests that are going to run have run | 455 | # called when the tests that are going to run have run |
1081 | 425 | self.pb.clear() | 456 | self.pb.clear() |
1082 | 426 | super(TextTestResult, self).done() | ||
1083 | 427 | |||
1084 | 428 | def finished(self): | ||
1085 | 429 | self.pb.finished() | 457 | self.pb.finished() |
1086 | 458 | super(TextTestResult, self).stopTestRun() | ||
1087 | 430 | 459 | ||
1089 | 431 | def report_starting(self): | 460 | def startTestRun(self): |
1090 | 461 | super(TextTestResult, self).startTestRun() | ||
1091 | 432 | self.pb.update('[test 0/%d] Starting' % (self.num_tests)) | 462 | self.pb.update('[test 0/%d] Starting' % (self.num_tests)) |
1092 | 433 | 463 | ||
1093 | 434 | def printErrors(self): | 464 | def printErrors(self): |
1094 | @@ -513,7 +543,8 @@ | |||
1095 | 513 | result = a_string | 543 | result = a_string |
1096 | 514 | return result.ljust(final_width) | 544 | return result.ljust(final_width) |
1097 | 515 | 545 | ||
1099 | 516 | def report_starting(self): | 546 | def startTestRun(self): |
1100 | 547 | super(VerboseTestResult, self).startTestRun() | ||
1101 | 517 | self.stream.write('running %d tests...\n' % self.num_tests) | 548 | self.stream.write('running %d tests...\n' % self.num_tests) |
1102 | 518 | 549 | ||
1103 | 519 | def report_test_start(self, test): | 550 | def report_test_start(self, test): |
1104 | @@ -577,88 +608,57 @@ | |||
1105 | 577 | descriptions=0, | 608 | descriptions=0, |
1106 | 578 | verbosity=1, | 609 | verbosity=1, |
1107 | 579 | bench_history=None, | 610 | bench_history=None, |
1108 | 580 | list_only=False, | ||
1109 | 581 | strict=False, | 611 | strict=False, |
1110 | 612 | result_decorators=None, | ||
1111 | 582 | ): | 613 | ): |
1112 | 614 | """Create a TextTestRunner. | ||
1113 | 615 | |||
1114 | 616 | :param result_decorators: An optional list of decorators to apply | ||
1115 | 617 | to the result object being used by the runner. Decorators are | ||
1116 | 618 | applied left to right - the first element in the list is the | ||
1117 | 619 | innermost decorator. | ||
1118 | 620 | """ | ||
1119 | 583 | self.stream = unittest._WritelnDecorator(stream) | 621 | self.stream = unittest._WritelnDecorator(stream) |
1120 | 584 | self.descriptions = descriptions | 622 | self.descriptions = descriptions |
1121 | 585 | self.verbosity = verbosity | 623 | self.verbosity = verbosity |
1122 | 586 | self._bench_history = bench_history | 624 | self._bench_history = bench_history |
1123 | 587 | self.list_only = list_only | ||
1124 | 588 | self._strict = strict | 625 | self._strict = strict |
1125 | 626 | self._result_decorators = result_decorators or [] | ||
1126 | 589 | 627 | ||
1127 | 590 | def run(self, test): | 628 | def run(self, test): |
1128 | 591 | "Run the given test case or test suite." | 629 | "Run the given test case or test suite." |
1129 | 592 | startTime = time.time() | ||
1130 | 593 | if self.verbosity == 1: | 630 | if self.verbosity == 1: |
1131 | 594 | result_class = TextTestResult | 631 | result_class = TextTestResult |
1132 | 595 | elif self.verbosity >= 2: | 632 | elif self.verbosity >= 2: |
1133 | 596 | result_class = VerboseTestResult | 633 | result_class = VerboseTestResult |
1135 | 597 | result = result_class(self.stream, | 634 | original_result = result_class(self.stream, |
1136 | 598 | self.descriptions, | 635 | self.descriptions, |
1137 | 599 | self.verbosity, | 636 | self.verbosity, |
1138 | 600 | bench_history=self._bench_history, | 637 | bench_history=self._bench_history, |
1139 | 601 | strict=self._strict, | 638 | strict=self._strict, |
1140 | 602 | ) | 639 | ) |
1200 | 603 | result.stop_early = self.stop_on_failure | 640 | # Signal to result objects that look at stop early policy to stop, |
1201 | 604 | result.report_starting() | 641 | original_result.stop_early = self.stop_on_failure |
1202 | 605 | if self.list_only: | 642 | result = original_result |
1203 | 606 | if self.verbosity >= 2: | 643 | for decorator in self._result_decorators: |
1204 | 607 | self.stream.writeln("Listing tests only ...\n") | 644 | result = decorator(result) |
1205 | 608 | run = 0 | 645 | result.stop_early = self.stop_on_failure |
1206 | 609 | for t in iter_suite_tests(test): | 646 | try: |
1207 | 610 | self.stream.writeln("%s" % (t.id())) | 647 | import testtools |
1208 | 611 | run += 1 | 648 | except ImportError: |
1209 | 612 | return None | 649 | pass |
1210 | 613 | else: | 650 | else: |
1211 | 614 | try: | 651 | if isinstance(test, testtools.ConcurrentTestSuite): |
1212 | 615 | import testtools | 652 | # We need to catch bzr specific behaviors |
1213 | 616 | except ImportError: | 653 | result = BZRTransformingResult(result) |
1214 | 617 | test.run(result) | 654 | result.startTestRun() |
1215 | 618 | else: | 655 | try: |
1216 | 619 | if isinstance(test, testtools.ConcurrentTestSuite): | 656 | test.run(result) |
1217 | 620 | # We need to catch bzr specific behaviors | 657 | finally: |
1218 | 621 | test.run(BZRTransformingResult(result)) | 658 | result.stopTestRun() |
1219 | 622 | else: | 659 | # higher level code uses our extended protocol to determine |
1220 | 623 | test.run(result) | 660 | # what exit code to give. |
1221 | 624 | run = result.testsRun | 661 | return original_result |
1163 | 625 | actionTaken = "Ran" | ||
1164 | 626 | stopTime = time.time() | ||
1165 | 627 | timeTaken = stopTime - startTime | ||
1166 | 628 | result.printErrors() | ||
1167 | 629 | self.stream.writeln(result.separator2) | ||
1168 | 630 | self.stream.writeln("%s %d test%s in %.3fs" % (actionTaken, | ||
1169 | 631 | run, run != 1 and "s" or "", timeTaken)) | ||
1170 | 632 | self.stream.writeln() | ||
1171 | 633 | if not result.wasSuccessful(): | ||
1172 | 634 | self.stream.write("FAILED (") | ||
1173 | 635 | failed, errored = map(len, (result.failures, result.errors)) | ||
1174 | 636 | if failed: | ||
1175 | 637 | self.stream.write("failures=%d" % failed) | ||
1176 | 638 | if errored: | ||
1177 | 639 | if failed: self.stream.write(", ") | ||
1178 | 640 | self.stream.write("errors=%d" % errored) | ||
1179 | 641 | if result.known_failure_count: | ||
1180 | 642 | if failed or errored: self.stream.write(", ") | ||
1181 | 643 | self.stream.write("known_failure_count=%d" % | ||
1182 | 644 | result.known_failure_count) | ||
1183 | 645 | self.stream.writeln(")") | ||
1184 | 646 | else: | ||
1185 | 647 | if result.known_failure_count: | ||
1186 | 648 | self.stream.writeln("OK (known_failures=%d)" % | ||
1187 | 649 | result.known_failure_count) | ||
1188 | 650 | else: | ||
1189 | 651 | self.stream.writeln("OK") | ||
1190 | 652 | if result.skip_count > 0: | ||
1191 | 653 | skipped = result.skip_count | ||
1192 | 654 | self.stream.writeln('%d test%s skipped' % | ||
1193 | 655 | (skipped, skipped != 1 and "s" or "")) | ||
1194 | 656 | if result.unsupported: | ||
1195 | 657 | for feature, count in sorted(result.unsupported.items()): | ||
1196 | 658 | self.stream.writeln("Missing feature '%s' skipped %d tests." % | ||
1197 | 659 | (feature, count)) | ||
1198 | 660 | result.finished() | ||
1199 | 661 | return result | ||
1222 | 662 | 662 | ||
1223 | 663 | 663 | ||
1224 | 664 | def iter_suite_tests(suite): | 664 | def iter_suite_tests(suite): |
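The new `result_decorators` loop in `run()` applies decorators left to right, so the first list element ends up innermost. A tiny sketch of that ordering (the decorator factories here are hypothetical stand-ins, not bzrlib API):

```python
applied = []

def make_decorator(name):
    """Hypothetical decorator factory: records the order in which the
    runner applies it, then returns the result unchanged (a real
    decorator would return a forwarding wrapper)."""
    def decorator(result):
        applied.append(name)
        return result
    return decorator

original_result = object()
result_decorators = [make_decorator("inner"), make_decorator("outer")]

# Mirrors the chaining loop in TextTestRunner.run():
result = original_result
for decorator in result_decorators:
    result = decorator(result)

assert applied == ["inner", "outer"]  # first element is innermost
```

Note that the runner still returns `original_result`, not the decorated chain, so callers can inspect the undecorated outcome for the exit code.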
1225 | @@ -928,6 +928,18 @@ | |||
1226 | 928 | def _lock_broken(self, result): | 928 | def _lock_broken(self, result): |
1227 | 929 | self._lock_actions.append(('broken', result)) | 929 | self._lock_actions.append(('broken', result)) |
1228 | 930 | 930 | ||
1229 | 931 | def start_server(self, transport_server, backing_server=None): | ||
1230 | 932 | """Start transport_server for this test. | ||
1231 | 933 | |||
1232 | 934 | This starts the server, registers a cleanup for it and permits the | ||
1233 | 935 | server's urls to be used. | ||
1234 | 936 | """ | ||
1235 | 937 | if backing_server is None: | ||
1236 | 938 | transport_server.setUp() | ||
1237 | 939 | else: | ||
1238 | 940 | transport_server.setUp(backing_server) | ||
1239 | 941 | self.addCleanup(transport_server.tearDown) | ||
1240 | 942 | |||
1241 | 931 | def _ndiff_strings(self, a, b): | 943 | def _ndiff_strings(self, a, b): |
1242 | 932 | """Return ndiff between two strings containing lines. | 944 | """Return ndiff between two strings containing lines. |
1243 | 933 | 945 | ||
1244 | @@ -2067,13 +2079,12 @@ | |||
1245 | 2067 | if self.__readonly_server is None: | 2079 | if self.__readonly_server is None: |
1246 | 2068 | if self.transport_readonly_server is None: | 2080 | if self.transport_readonly_server is None: |
1247 | 2069 | # readonly decorator requested | 2081 | # readonly decorator requested |
1248 | 2070 | # bring up the server | ||
1249 | 2071 | self.__readonly_server = ReadonlyServer() | 2082 | self.__readonly_server = ReadonlyServer() |
1250 | 2072 | self.__readonly_server.setUp(self.get_vfs_only_server()) | ||
1251 | 2073 | else: | 2083 | else: |
1252 | 2084 | # explicit readonly transport. | ||
1253 | 2074 | self.__readonly_server = self.create_transport_readonly_server() | 2085 | self.__readonly_server = self.create_transport_readonly_server() |
1256 | 2075 | self.__readonly_server.setUp(self.get_vfs_only_server()) | 2086 | self.start_server(self.__readonly_server, |
1257 | 2076 | self.addCleanup(self.__readonly_server.tearDown) | 2087 | self.get_vfs_only_server()) |
1258 | 2077 | return self.__readonly_server | 2088 | return self.__readonly_server |
1259 | 2078 | 2089 | ||
1260 | 2079 | def get_readonly_url(self, relpath=None): | 2090 | def get_readonly_url(self, relpath=None): |
1261 | @@ -2098,8 +2109,7 @@ | |||
1262 | 2098 | """ | 2109 | """ |
1263 | 2099 | if self.__vfs_server is None: | 2110 | if self.__vfs_server is None: |
1264 | 2100 | self.__vfs_server = MemoryServer() | 2111 | self.__vfs_server = MemoryServer() |
1267 | 2101 | self.__vfs_server.setUp() | 2112 | self.start_server(self.__vfs_server) |
1266 | 2102 | self.addCleanup(self.__vfs_server.tearDown) | ||
1268 | 2103 | return self.__vfs_server | 2113 | return self.__vfs_server |
1269 | 2104 | 2114 | ||
1270 | 2105 | def get_server(self): | 2115 | def get_server(self): |
1271 | @@ -2112,19 +2122,13 @@ | |||
1272 | 2112 | then the self.get_vfs_server is returned. | 2122 | then the self.get_vfs_server is returned. |
1273 | 2113 | """ | 2123 | """ |
1274 | 2114 | if self.__server is None: | 2124 | if self.__server is None: |
1277 | 2115 | if self.transport_server is None or self.transport_server is self.vfs_transport_factory: | 2125 | if (self.transport_server is None or self.transport_server is |
1278 | 2116 | return self.get_vfs_only_server() | 2126 | self.vfs_transport_factory): |
1279 | 2127 | self.__server = self.get_vfs_only_server() | ||
1280 | 2117 | else: | 2128 | else: |
1281 | 2118 | # bring up a decorated means of access to the vfs only server. | 2129 | # bring up a decorated means of access to the vfs only server. |
1282 | 2119 | self.__server = self.transport_server() | 2130 | self.__server = self.transport_server() |
1291 | 2120 | try: | 2131 | self.start_server(self.__server, self.get_vfs_only_server()) |
1284 | 2121 | self.__server.setUp(self.get_vfs_only_server()) | ||
1285 | 2122 | except TypeError, e: | ||
1286 | 2123 | # This should never happen; the try:Except here is to assist | ||
1287 | 2124 | # developers having to update code rather than seeing an | ||
1288 | 2125 | # uninformative TypeError. | ||
1289 | 2126 | raise Exception, "Old server API in use: %s, %s" % (self.__server, e) | ||
1290 | 2127 | self.addCleanup(self.__server.tearDown) | ||
1292 | 2128 | return self.__server | 2132 | return self.__server |
1293 | 2129 | 2133 | ||
1294 | 2130 | def _adjust_url(self, base, relpath): | 2134 | def _adjust_url(self, base, relpath): |
1295 | @@ -2263,9 +2267,8 @@ | |||
1296 | 2263 | 2267 | ||
1297 | 2264 | def make_smart_server(self, path): | 2268 | def make_smart_server(self, path): |
1298 | 2265 | smart_server = server.SmartTCPServer_for_testing() | 2269 | smart_server = server.SmartTCPServer_for_testing() |
1300 | 2266 | smart_server.setUp(self.get_server()) | 2270 | self.start_server(smart_server, self.get_server()) |
1301 | 2267 | remote_transport = get_transport(smart_server.get_url()).clone(path) | 2271 | remote_transport = get_transport(smart_server.get_url()).clone(path) |
1302 | 2268 | self.addCleanup(smart_server.tearDown) | ||
1303 | 2269 | return remote_transport | 2272 | return remote_transport |
1304 | 2270 | 2273 | ||
1305 | 2271 | def make_branch_and_memory_tree(self, relpath, format=None): | 2274 | def make_branch_and_memory_tree(self, relpath, format=None): |
1306 | @@ -2472,8 +2475,7 @@ | |||
1307 | 2472 | """ | 2475 | """ |
1308 | 2473 | if self.__vfs_server is None: | 2476 | if self.__vfs_server is None: |
1309 | 2474 | self.__vfs_server = self.vfs_transport_factory() | 2477 | self.__vfs_server = self.vfs_transport_factory() |
1312 | 2475 | self.__vfs_server.setUp() | 2478 | self.start_server(self.__vfs_server) |
1311 | 2476 | self.addCleanup(self.__vfs_server.tearDown) | ||
1313 | 2477 | return self.__vfs_server | 2479 | return self.__vfs_server |
1314 | 2478 | 2480 | ||
1315 | 2479 | def make_branch_and_tree(self, relpath, format=None): | 2481 | def make_branch_and_tree(self, relpath, format=None): |
1316 | @@ -2486,6 +2488,15 @@ | |||
1317 | 2486 | repository will also be accessed locally. Otherwise a lightweight | 2488 | repository will also be accessed locally. Otherwise a lightweight |
1318 | 2487 | checkout is created and returned. | 2489 | checkout is created and returned. |
1319 | 2488 | 2490 | ||
1320 | 2491 | We do this because we can't physically create a tree in the local | ||
1321 | 2492 | path, with a branch reference to the transport_factory url, and | ||
1322 | 2493 | a branch + repository in the vfs_transport, unless the vfs_transport | ||
1323 | 2494 | namespace is distinct from the local disk - the two branch objects | ||
1324 | 2495 | would collide. While we could construct a tree with its branch object | ||
1325 | 2496 | pointing at the transport_factory transport in memory, reopening it | ||
1326 | 2497 | would behaving unexpectedly, and has in the past caused testing bugs | ||
1327 | 2498 | when we tried to do it that way. | ||
1328 | 2499 | |||
1329 | 2489 | :param format: The BzrDirFormat. | 2500 | :param format: The BzrDirFormat. |
1330 | 2490 | :returns: the WorkingTree. | 2501 | :returns: the WorkingTree. |
1331 | 2491 | """ | 2502 | """ |
1332 | @@ -2762,7 +2773,9 @@ | |||
1333 | 2762 | strict=False, | 2773 | strict=False, |
1334 | 2763 | runner_class=None, | 2774 | runner_class=None, |
1335 | 2764 | suite_decorators=None, | 2775 | suite_decorators=None, |
1337 | 2765 | stream=None): | 2776 | stream=None, |
1338 | 2777 | result_decorators=None, | ||
1339 | 2778 | ): | ||
1340 | 2766 | """Run a test suite for bzr selftest. | 2779 | """Run a test suite for bzr selftest. |
1341 | 2767 | 2780 | ||
1342 | 2768 | :param runner_class: The class of runner to use. Must support the | 2781 | :param runner_class: The class of runner to use. Must support the |
1343 | @@ -2783,8 +2796,8 @@ | |||
1344 | 2783 | descriptions=0, | 2796 | descriptions=0, |
1345 | 2784 | verbosity=verbosity, | 2797 | verbosity=verbosity, |
1346 | 2785 | bench_history=bench_history, | 2798 | bench_history=bench_history, |
1347 | 2786 | list_only=list_only, | ||
1348 | 2787 | strict=strict, | 2799 | strict=strict, |
1349 | 2800 | result_decorators=result_decorators, | ||
1350 | 2788 | ) | 2801 | ) |
1351 | 2789 | runner.stop_on_failure=stop_on_failure | 2802 | runner.stop_on_failure=stop_on_failure |
1352 | 2790 | # built in decorator factories: | 2803 | # built in decorator factories: |
1353 | @@ -2805,10 +2818,15 @@ | |||
1354 | 2805 | decorators.append(CountingDecorator) | 2818 | decorators.append(CountingDecorator) |
1355 | 2806 | for decorator in decorators: | 2819 | for decorator in decorators: |
1356 | 2807 | suite = decorator(suite) | 2820 | suite = decorator(suite) |
1357 | 2808 | result = runner.run(suite) | ||
1358 | 2809 | if list_only: | 2821 | if list_only: |
1359 | 2822 | # Done after test suite decoration to allow randomisation etc | ||
1360 | 2823 | # to take effect, though that is of marginal benefit. | ||
1361 | 2824 | if verbosity >= 2: | ||
1362 | 2825 | stream.write("Listing tests only ...\n") | ||
1363 | 2826 | for t in iter_suite_tests(suite): | ||
1364 | 2827 | stream.write("%s\n" % (t.id())) | ||
1365 | 2810 | return True | 2828 | return True |
1367 | 2811 | result.done() | 2829 | result = runner.run(suite) |
1368 | 2812 | if strict: | 2830 | if strict: |
1369 | 2813 | return result.wasStrictlySuccessful() | 2831 | return result.wasStrictlySuccessful() |
1370 | 2814 | else: | 2832 | else: |
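List-only mode now walks the already-decorated suite and prints ids without running anything. The flattening that `iter_suite_tests` performs can be sketched as follows (the helper is reimplemented here for illustration over stdlib `unittest`):

```python
import unittest

def iter_suite_tests(suite):
    """Yield individual test cases from a possibly nested TestSuite."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            # Recurse into nested suites rather than yielding them whole.
            for test in iter_suite_tests(item):
                yield test
        else:
            yield item

class Sample(unittest.TestCase):
    def test_a(self):
        pass
    def test_b(self):
        pass

suite = unittest.TestSuite([
    unittest.TestLoader().loadTestsFromTestCase(Sample),
])
ids = sorted(t.id() for t in iter_suite_tests(suite))
```

Listing after decoration means randomisation and filtering decorators affect the printed order, which is the marginal benefit the comment in the hunk refers to.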
1371 | @@ -3131,7 +3149,7 @@ | |||
1372 | 3131 | return result | 3149 | return result |
1373 | 3132 | 3150 | ||
1374 | 3133 | 3151 | ||
1376 | 3134 | class BZRTransformingResult(unittest.TestResult): | 3152 | class ForwardingResult(unittest.TestResult): |
1377 | 3135 | 3153 | ||
1378 | 3136 | def __init__(self, target): | 3154 | def __init__(self, target): |
1379 | 3137 | unittest.TestResult.__init__(self) | 3155 | unittest.TestResult.__init__(self) |
1380 | @@ -3143,6 +3161,27 @@ | |||
1381 | 3143 | def stopTest(self, test): | 3161 | def stopTest(self, test): |
1382 | 3144 | self.result.stopTest(test) | 3162 | self.result.stopTest(test) |
1383 | 3145 | 3163 | ||
1384 | 3164 | def startTestRun(self): | ||
1385 | 3165 | self.result.startTestRun() | ||
1386 | 3166 | |||
1387 | 3167 | def stopTestRun(self): | ||
1388 | 3168 | self.result.stopTestRun() | ||
1389 | 3169 | |||
1390 | 3170 | def addSkip(self, test, reason): | ||
1391 | 3171 | self.result.addSkip(test, reason) | ||
1392 | 3172 | |||
1393 | 3173 | def addSuccess(self, test): | ||
1394 | 3174 | self.result.addSuccess(test) | ||
1395 | 3175 | |||
1396 | 3176 | def addError(self, test, err): | ||
1397 | 3177 | self.result.addError(test, err) | ||
1398 | 3178 | |||
1399 | 3179 | def addFailure(self, test, err): | ||
1400 | 3180 | self.result.addFailure(test, err) | ||
1401 | 3181 | |||
1402 | 3182 | |||
1403 | 3183 | class BZRTransformingResult(ForwardingResult): | ||
1404 | 3184 | |||
1405 | 3146 | def addError(self, test, err): | 3185 | def addError(self, test, err): |
1406 | 3147 | feature = self._error_looks_like('UnavailableFeature: ', err) | 3186 | feature = self._error_looks_like('UnavailableFeature: ', err) |
1407 | 3148 | if feature is not None: | 3187 | if feature is not None: |
1408 | @@ -3158,12 +3197,6 @@ | |||
1409 | 3158 | else: | 3197 | else: |
1410 | 3159 | self.result.addFailure(test, err) | 3198 | self.result.addFailure(test, err) |
1411 | 3160 | 3199 | ||
1412 | 3161 | def addSkip(self, test, reason): | ||
1413 | 3162 | self.result.addSkip(test, reason) | ||
1414 | 3163 | |||
1415 | 3164 | def addSuccess(self, test): | ||
1416 | 3165 | self.result.addSuccess(test) | ||
1417 | 3166 | |||
1418 | 3167 | def _error_looks_like(self, prefix, err): | 3200 | def _error_looks_like(self, prefix, err): |
1419 | 3168 | """Deserialize the exception and return its stringified value.""" | 3201 | """Deserialize the exception and return its stringified value.""" |
1420 | 3169 | import subunit | 3202 | import subunit |
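`ForwardingResult` is a pass-through base: every event goes straight to a wrapped target result, and subclasses such as `BZRTransformingResult` override only the events they need to transform. A reduced sketch over stdlib `unittest`:

```python
import unittest

class ForwardingResult(unittest.TestResult):
    """Delegate every test event to a target result (reduced sketch of
    the base class factored out in the diff)."""
    def __init__(self, target):
        unittest.TestResult.__init__(self)
        self.result = target
    def startTest(self, test):
        self.result.startTest(test)
    def stopTest(self, test):
        self.result.stopTest(test)
    def addSuccess(self, test):
        self.result.addSuccess(test)
    def addError(self, test, err):
        self.result.addError(test, err)
    def addFailure(self, test, err):
        self.result.addFailure(test, err)

class Passing(unittest.TestCase):
    def test_ok(self):
        pass

target = unittest.TestResult()
Passing("test_ok").run(ForwardingResult(target))
```

Factoring this out lets `ProfileResult` below reuse the forwarding behaviour instead of duplicating the delegation methods.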
1421 | @@ -3181,6 +3214,38 @@ | |||
1422 | 3181 | return value | 3214 | return value |
1423 | 3182 | 3215 | ||
1424 | 3183 | 3216 | ||
1425 | 3217 | class ProfileResult(ForwardingResult): | ||
1426 | 3218 | """Generate profiling data for all activity between start and success. | ||
1427 | 3219 | |||
1428 | 3220 | The profile data is appended to the test's _benchcalls attribute and can | ||
1429 | 3221 | be accessed by the forwarded-to TestResult. | ||
1430 | 3222 | |||
1431 | 3223 | While it might be cleaner to accumulate this in stopTest, addSuccess is | ||
1432 | 3224 | where our existing output support for lsprof is, and this class aims to | ||
1433 | 3225 | fit in with that: while it could be moved it's not necessary to accomplish | ||
1434 | 3226 | test profiling, nor would it be dramatically cleaner. | ||
1435 | 3227 | """ | ||
1436 | 3228 | |||
1437 | 3229 | def startTest(self, test): | ||
1438 | 3230 | self.profiler = bzrlib.lsprof.BzrProfiler() | ||
1439 | 3231 | self.profiler.start() | ||
1440 | 3232 | ForwardingResult.startTest(self, test) | ||
1441 | 3233 | |||
1442 | 3234 | def addSuccess(self, test): | ||
1443 | 3235 | stats = self.profiler.stop() | ||
1444 | 3236 | try: | ||
1445 | 3237 | calls = test._benchcalls | ||
1446 | 3238 | except AttributeError: | ||
1447 | 3239 | test._benchcalls = [] | ||
1448 | 3240 | calls = test._benchcalls | ||
1449 | 3241 | calls.append(((test.id(), "", ""), stats)) | ||
1450 | 3242 | ForwardingResult.addSuccess(self, test) | ||
1451 | 3243 | |||
1452 | 3244 | def stopTest(self, test): | ||
1453 | 3245 | ForwardingResult.stopTest(self, test) | ||
1454 | 3246 | self.profiler = None | ||
1455 | 3247 | |||
1456 | 3248 | |||
1457 | 3184 | # Controlled by "bzr selftest -E=..." option | 3249 | # Controlled by "bzr selftest -E=..." option |
1458 | 3185 | # Currently supported: | 3250 | # Currently supported: |
1459 | 3186 | # -Eallow_debug Will no longer clear debug.debug_flags() so it | 3251 | # -Eallow_debug Will no longer clear debug.debug_flags() so it |
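`ProfileResult` brackets each test with a profiler: start it in `startTest`, stop it in `addSuccess`, and append the stats to the test's `_benchcalls` list where the existing lsprof output support picks them up. A sketch of the same decorator shape, substituting stdlib `cProfile` for `bzrlib.lsprof`:

```python
import cProfile
import unittest

class ProfilingResult(unittest.TestResult):
    """Per-test profiling decorator (sketch: cProfile stands in for
    bzrlib.lsprof; events are forwarded to a target result)."""
    def __init__(self, target):
        super(ProfilingResult, self).__init__()
        self.result = target
        self.profiler = None

    def startTest(self, test):
        self.profiler = cProfile.Profile()
        self.profiler.enable()
        self.result.startTest(test)

    def addSuccess(self, test):
        self.profiler.disable()
        # Append stats to the test, creating _benchcalls on first use,
        # matching the attribute the diff's output code reads.
        calls = getattr(test, "_benchcalls", None)
        if calls is None:
            calls = test._benchcalls = []
        calls.append(((test.id(), "", ""), self.profiler))
        self.result.addSuccess(test)

    def stopTest(self, test):
        self.result.stopTest(test)
        self.profiler = None

class Passing(unittest.TestCase):
    def test_ok(self):
        sum(range(1000))

target = unittest.TestResult()
case = Passing("test_ok")
case.run(ProfilingResult(target))
```

Clearing `self.profiler` in `stopTest` keeps the decorator from holding profile data alive across tests, the same hygiene the patch applies.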
1460 | @@ -3208,6 +3273,7 @@ | |||
1461 | 3208 | runner_class=None, | 3273 | runner_class=None, |
1462 | 3209 | suite_decorators=None, | 3274 | suite_decorators=None, |
1463 | 3210 | stream=None, | 3275 | stream=None, |
1464 | 3276 | lsprof_tests=False, | ||
1465 | 3211 | ): | 3277 | ): |
1466 | 3212 | """Run the whole test suite under the enhanced runner""" | 3278 | """Run the whole test suite under the enhanced runner""" |
1467 | 3213 | # XXX: Very ugly way to do this... | 3279 | # XXX: Very ugly way to do this... |
1468 | @@ -3242,6 +3308,9 @@ | |||
1469 | 3242 | if starting_with: | 3308 | if starting_with: |
1470 | 3243 | # But always filter as requested. | 3309 | # But always filter as requested. |
1471 | 3244 | suite = filter_suite_by_id_startswith(suite, starting_with) | 3310 | suite = filter_suite_by_id_startswith(suite, starting_with) |
1472 | 3311 | result_decorators = [] | ||
1473 | 3312 | if lsprof_tests: | ||
1474 | 3313 | result_decorators.append(ProfileResult) | ||
1475 | 3245 | return run_suite(suite, 'testbzr', verbose=verbose, pattern=pattern, | 3314 | return run_suite(suite, 'testbzr', verbose=verbose, pattern=pattern, |
1476 | 3246 | stop_on_failure=stop_on_failure, | 3315 | stop_on_failure=stop_on_failure, |
1477 | 3247 | transport=transport, | 3316 | transport=transport, |
1478 | @@ -3255,6 +3324,7 @@ | |||
1479 | 3255 | runner_class=runner_class, | 3324 | runner_class=runner_class, |
1480 | 3256 | suite_decorators=suite_decorators, | 3325 | suite_decorators=suite_decorators, |
1481 | 3257 | stream=stream, | 3326 | stream=stream, |
1482 | 3327 | result_decorators=result_decorators, | ||
1483 | 3258 | ) | 3328 | ) |
1484 | 3259 | finally: | 3329 | finally: |
1485 | 3260 | default_transport = old_transport | 3330 | default_transport = old_transport |
1486 | @@ -3416,6 +3486,206 @@ | |||
1487 | 3416 | test_prefix_alias_registry.register('bp', 'bzrlib.plugins') | 3486 | test_prefix_alias_registry.register('bp', 'bzrlib.plugins') |
1488 | 3417 | 3487 | ||
1489 | 3418 | 3488 | ||
1490 | 3489 | def _test_suite_testmod_names(): | ||
1491 | 3490 | """Return the standard list of test module names to test.""" | ||
1492 | 3491 | return [ | ||
1493 | 3492 | 'bzrlib.doc', | ||
1494 | 3493 | 'bzrlib.tests.blackbox', | ||
1495 | 3494 | 'bzrlib.tests.commands', | ||
1496 | 3495 | 'bzrlib.tests.per_branch', | ||
1497 | 3496 | 'bzrlib.tests.per_bzrdir', | ||
1498 | 3497 | 'bzrlib.tests.per_interrepository', | ||
1499 | 3498 | 'bzrlib.tests.per_intertree', | ||
1500 | 3499 | 'bzrlib.tests.per_inventory', | ||
1501 | 3500 | 'bzrlib.tests.per_interbranch', | ||
1502 | 3501 | 'bzrlib.tests.per_lock', | ||
1503 | 3502 | 'bzrlib.tests.per_transport', | ||
1504 | 3503 | 'bzrlib.tests.per_tree', | ||
1505 | 3504 | 'bzrlib.tests.per_pack_repository', | ||
1506 | 3505 | 'bzrlib.tests.per_repository', | ||
1507 | 3506 | 'bzrlib.tests.per_repository_chk', | ||
1508 | 3507 | 'bzrlib.tests.per_repository_reference', | ||
1509 | 3508 | 'bzrlib.tests.per_versionedfile', | ||
1510 | 3509 | 'bzrlib.tests.per_workingtree', | ||
1511 | 3510 | 'bzrlib.tests.test__annotator', | ||
1512 | 3511 | 'bzrlib.tests.test__chk_map', | ||
1513 | 3512 | 'bzrlib.tests.test__dirstate_helpers', | ||
1514 | 3513 | 'bzrlib.tests.test__groupcompress', | ||
1515 | 3514 | 'bzrlib.tests.test__known_graph', | ||
1516 | 3515 | 'bzrlib.tests.test__rio', | ||
1517 | 3516 | 'bzrlib.tests.test__walkdirs_win32', | ||
1518 | 3517 | 'bzrlib.tests.test_ancestry', | ||
1519 | 3518 | 'bzrlib.tests.test_annotate', | ||
1520 | 3519 | 'bzrlib.tests.test_api', | ||
1521 | 3520 | 'bzrlib.tests.test_atomicfile', | ||
1522 | 3521 | 'bzrlib.tests.test_bad_files', | ||
1523 | 3522 | 'bzrlib.tests.test_bencode', | ||
1524 | 3523 | 'bzrlib.tests.test_bisect_multi', | ||
1525 | 3524 | 'bzrlib.tests.test_branch', | ||
1526 | 3525 | 'bzrlib.tests.test_branchbuilder', | ||
1527 | 3526 | 'bzrlib.tests.test_btree_index', | ||
1528 | 3527 | 'bzrlib.tests.test_bugtracker', | ||
1529 | 3528 | 'bzrlib.tests.test_bundle', | ||
1530 | 3529 | 'bzrlib.tests.test_bzrdir', | ||
1531 | 3530 | 'bzrlib.tests.test__chunks_to_lines', | ||
1532 | 3531 | 'bzrlib.tests.test_cache_utf8', | ||
1533 | 3532 | 'bzrlib.tests.test_chk_map', | ||
1534 | 3533 | 'bzrlib.tests.test_chk_serializer', | ||
1535 | 3534 | 'bzrlib.tests.test_chunk_writer', | ||
1536 | 3535 | 'bzrlib.tests.test_clean_tree', | ||
1537 | 3536 | 'bzrlib.tests.test_commands', | ||
1538 | 3537 | 'bzrlib.tests.test_commit', | ||
1539 | 3538 | 'bzrlib.tests.test_commit_merge', | ||
1540 | 3539 | 'bzrlib.tests.test_config', | ||
1541 | 3540 | 'bzrlib.tests.test_conflicts', | ||
1542 | 3541 | 'bzrlib.tests.test_counted_lock', | ||
1543 | 3542 | 'bzrlib.tests.test_crash', | ||
1544 | 3543 | 'bzrlib.tests.test_decorators', | ||
1545 | 3544 | 'bzrlib.tests.test_delta', | ||
1546 | 3545 | 'bzrlib.tests.test_debug', | ||
1547 | 3546 | 'bzrlib.tests.test_deprecated_graph', | ||
1548 | 3547 | 'bzrlib.tests.test_diff', | ||
1549 | 3548 | 'bzrlib.tests.test_directory_service', | ||
1550 | 3549 | 'bzrlib.tests.test_dirstate', | ||
1551 | 3550 | 'bzrlib.tests.test_email_message', | ||
1552 | 3551 | 'bzrlib.tests.test_eol_filters', | ||
1553 | 3552 | 'bzrlib.tests.test_errors', | ||
1554 | 3553 | 'bzrlib.tests.test_export', | ||
1555 | 3554 | 'bzrlib.tests.test_extract', | ||
1556 | 3555 | 'bzrlib.tests.test_fetch', | ||
1557 | 3556 | 'bzrlib.tests.test_fifo_cache', | ||
1558 | 3557 | 'bzrlib.tests.test_filters', | ||
1559 | 3558 | 'bzrlib.tests.test_ftp_transport', | ||
1560 | 3559 | 'bzrlib.tests.test_foreign', | ||
1561 | 3560 | 'bzrlib.tests.test_generate_docs', | ||
1562 | 3561 | 'bzrlib.tests.test_generate_ids', | ||
1563 | 3562 | 'bzrlib.tests.test_globbing', | ||
1564 | 3563 | 'bzrlib.tests.test_gpg', | ||
1565 | 3564 | 'bzrlib.tests.test_graph', | ||
1566 | 3565 | 'bzrlib.tests.test_groupcompress', | ||
1567 | 3566 | 'bzrlib.tests.test_hashcache', | ||
1568 | 3567 | 'bzrlib.tests.test_help', | ||
1569 | 3568 | 'bzrlib.tests.test_hooks', | ||
1570 | 3569 | 'bzrlib.tests.test_http', | ||
1571 | 3570 | 'bzrlib.tests.test_http_response', | ||
1572 | 3571 | 'bzrlib.tests.test_https_ca_bundle', | ||
1573 | 3572 | 'bzrlib.tests.test_identitymap', | ||
1574 | 3573 | 'bzrlib.tests.test_ignores', | ||
1575 | 3574 | 'bzrlib.tests.test_index', | ||
1576 | 3575 | 'bzrlib.tests.test_info', | ||
1577 | 3576 | 'bzrlib.tests.test_inv', | ||
1578 | 3577 | 'bzrlib.tests.test_inventory_delta', | ||
1579 | 3578 | 'bzrlib.tests.test_knit', | ||
1580 | 3579 | 'bzrlib.tests.test_lazy_import', | ||
1581 | 3580 | 'bzrlib.tests.test_lazy_regex', | ||
1582 | 3581 | 'bzrlib.tests.test_lock', | ||
1583 | 3582 | 'bzrlib.tests.test_lockable_files', | ||
1584 | 3583 | 'bzrlib.tests.test_lockdir', | ||
1585 | 3584 | 'bzrlib.tests.test_log', | ||
1586 | 3585 | 'bzrlib.tests.test_lru_cache', | ||
1587 | 3586 | 'bzrlib.tests.test_lsprof', | ||
1588 | 3587 | 'bzrlib.tests.test_mail_client', | ||
1589 | 3588 | 'bzrlib.tests.test_memorytree', | ||
1590 | 3589 | 'bzrlib.tests.test_merge', | ||
1591 | 3590 | 'bzrlib.tests.test_merge3', | ||
1592 | 3591 | 'bzrlib.tests.test_merge_core', | ||
1593 | 3592 | 'bzrlib.tests.test_merge_directive', | ||
1594 | 3593 | 'bzrlib.tests.test_missing', | ||
1595 | 3594 | 'bzrlib.tests.test_msgeditor', | ||
1596 | 3595 | 'bzrlib.tests.test_multiparent', | ||
1597 | 3596 | 'bzrlib.tests.test_mutabletree', | ||
1598 | 3597 | 'bzrlib.tests.test_nonascii', | ||
1599 | 3598 | 'bzrlib.tests.test_options', | ||
1600 | 3599 | 'bzrlib.tests.test_osutils', | ||
1601 | 3600 | 'bzrlib.tests.test_osutils_encodings', | ||
1602 | 3601 | 'bzrlib.tests.test_pack', | ||
1603 | 3602 | 'bzrlib.tests.test_patch', | ||
1604 | 3603 | 'bzrlib.tests.test_patches', | ||
1605 | 3604 | 'bzrlib.tests.test_permissions', | ||
1606 | 3605 | 'bzrlib.tests.test_plugins', | ||
1607 | 3606 | 'bzrlib.tests.test_progress', | ||
1608 | 3607 | 'bzrlib.tests.test_read_bundle', | ||
1609 | 3608 | 'bzrlib.tests.test_reconcile', | ||
1610 | 3609 | 'bzrlib.tests.test_reconfigure', | ||
1611 | 3610 | 'bzrlib.tests.test_registry', | ||
1612 | 3611 | 'bzrlib.tests.test_remote', | ||
1613 | 3612 | 'bzrlib.tests.test_rename_map', | ||
1614 | 3613 | 'bzrlib.tests.test_repository', | ||
1615 | 3614 | 'bzrlib.tests.test_revert', | ||
1616 | 3615 | 'bzrlib.tests.test_revision', | ||
1617 | 3616 | 'bzrlib.tests.test_revisionspec', | ||
1618 | 3617 | 'bzrlib.tests.test_revisiontree', | ||
1619 | 3618 | 'bzrlib.tests.test_rio', | ||
1620 | 3619 | 'bzrlib.tests.test_rules', | ||
1621 | 3620 | 'bzrlib.tests.test_sampler', | ||
1622 | 3621 | 'bzrlib.tests.test_selftest', | ||
1623 | 3622 | 'bzrlib.tests.test_serializer', | ||
1624 | 3623 | 'bzrlib.tests.test_setup', | ||
1625 | 3624 | 'bzrlib.tests.test_sftp_transport', | ||
1626 | 3625 | 'bzrlib.tests.test_shelf', | ||
1627 | 3626 | 'bzrlib.tests.test_shelf_ui', | ||
1628 | 3627 | 'bzrlib.tests.test_smart', | ||
1629 | 3628 | 'bzrlib.tests.test_smart_add', | ||
1630 | 3629 | 'bzrlib.tests.test_smart_request', | ||
1631 | 3630 | 'bzrlib.tests.test_smart_transport', | ||
1632 | 3631 | 'bzrlib.tests.test_smtp_connection', | ||
1633 | 3632 | 'bzrlib.tests.test_source', | ||
1634 | 3633 | 'bzrlib.tests.test_ssh_transport', | ||
1635 | 3634 | 'bzrlib.tests.test_status', | ||
1636 | 3635 | 'bzrlib.tests.test_store', | ||
1637 | 3636 | 'bzrlib.tests.test_strace', | ||
1638 | 3637 | 'bzrlib.tests.test_subsume', | ||
1639 | 3638 | 'bzrlib.tests.test_switch', | ||
1640 | 3639 | 'bzrlib.tests.test_symbol_versioning', | ||
1641 | 3640 | 'bzrlib.tests.test_tag', | ||
1642 | 3641 | 'bzrlib.tests.test_testament', | ||
1643 | 3642 | 'bzrlib.tests.test_textfile', | ||
1644 | 3643 | 'bzrlib.tests.test_textmerge', | ||
1645 | 3644 | 'bzrlib.tests.test_timestamp', | ||
1646 | 3645 | 'bzrlib.tests.test_trace', | ||
1647 | 3646 | 'bzrlib.tests.test_transactions', | ||
1648 | 3647 | 'bzrlib.tests.test_transform', | ||
1649 | 3648 | 'bzrlib.tests.test_transport', | ||
1650 | 3649 | 'bzrlib.tests.test_transport_log', | ||
1651 | 3650 | 'bzrlib.tests.test_tree', | ||
1652 | 3651 | 'bzrlib.tests.test_treebuilder', | ||
1653 | 3652 | 'bzrlib.tests.test_tsort', | ||
1654 | 3653 | 'bzrlib.tests.test_tuned_gzip', | ||
1655 | 3654 | 'bzrlib.tests.test_ui', | ||
1656 | 3655 | 'bzrlib.tests.test_uncommit', | ||
1657 | 3656 | 'bzrlib.tests.test_upgrade', | ||
1658 | 3657 | 'bzrlib.tests.test_upgrade_stacked', | ||
1659 | 3658 | 'bzrlib.tests.test_urlutils', | ||
1660 | 3659 | 'bzrlib.tests.test_version', | ||
1661 | 3660 | 'bzrlib.tests.test_version_info', | ||
1662 | 3661 | 'bzrlib.tests.test_weave', | ||
1663 | 3662 | 'bzrlib.tests.test_whitebox', | ||
1664 | 3663 | 'bzrlib.tests.test_win32utils', | ||
1665 | 3664 | 'bzrlib.tests.test_workingtree', | ||
1666 | 3665 | 'bzrlib.tests.test_workingtree_4', | ||
1667 | 3666 | 'bzrlib.tests.test_wsgi', | ||
1668 | 3667 | 'bzrlib.tests.test_xml', | ||
1669 | 3668 | ] | ||
1670 | 3669 | |||
1671 | 3670 | |||
1672 | 3671 | def _test_suite_modules_to_doctest(): | ||
1673 | 3672 | """Return the list of modules to doctest.""" | ||
1674 | 3673 | return [ | ||
1675 | 3674 | 'bzrlib', | ||
1676 | 3675 | 'bzrlib.branchbuilder', | ||
1677 | 3676 | 'bzrlib.export', | ||
1678 | 3677 | 'bzrlib.inventory', | ||
1679 | 3678 | 'bzrlib.iterablefile', | ||
1680 | 3679 | 'bzrlib.lockdir', | ||
1681 | 3680 | 'bzrlib.merge3', | ||
1682 | 3681 | 'bzrlib.option', | ||
1683 | 3682 | 'bzrlib.symbol_versioning', | ||
1684 | 3683 | 'bzrlib.tests', | ||
1685 | 3684 | 'bzrlib.timestamp', | ||
1686 | 3685 | 'bzrlib.version_info_formats.format_custom', | ||
1687 | 3686 | ] | ||
1688 | 3687 | |||
1689 | 3688 | |||
1690 | 3419 | def test_suite(keep_only=None, starting_with=None): | 3689 | def test_suite(keep_only=None, starting_with=None): |
1691 | 3420 | """Build and return TestSuite for the whole of bzrlib. | 3690 | """Build and return TestSuite for the whole of bzrlib. |
1692 | 3421 | 3691 | ||
1693 | @@ -3427,184 +3697,6 @@ | |||
1694 | 3427 | This function can be replaced if you need to change the default test | 3697 | This function can be replaced if you need to change the default test |
1695 | 3428 | suite on a global basis, but it is not encouraged. | 3698 | suite on a global basis, but it is not encouraged. |
1696 | 3429 | """ | 3699 | """ |
-    testmod_names = [
-        'bzrlib.doc',
-        'bzrlib.tests.blackbox',
-        'bzrlib.tests.commands',
-        'bzrlib.tests.per_branch',
-        'bzrlib.tests.per_bzrdir',
-        'bzrlib.tests.per_interrepository',
-        'bzrlib.tests.per_intertree',
-        'bzrlib.tests.per_inventory',
-        'bzrlib.tests.per_interbranch',
-        'bzrlib.tests.per_lock',
-        'bzrlib.tests.per_transport',
-        'bzrlib.tests.per_tree',
-        'bzrlib.tests.per_pack_repository',
-        'bzrlib.tests.per_repository',
-        'bzrlib.tests.per_repository_chk',
-        'bzrlib.tests.per_repository_reference',
-        'bzrlib.tests.per_versionedfile',
-        'bzrlib.tests.per_workingtree',
-        'bzrlib.tests.test__annotator',
-        'bzrlib.tests.test__chk_map',
-        'bzrlib.tests.test__dirstate_helpers',
-        'bzrlib.tests.test__groupcompress',
-        'bzrlib.tests.test__known_graph',
-        'bzrlib.tests.test__rio',
-        'bzrlib.tests.test__walkdirs_win32',
-        'bzrlib.tests.test_ancestry',
-        'bzrlib.tests.test_annotate',
-        'bzrlib.tests.test_api',
-        'bzrlib.tests.test_atomicfile',
-        'bzrlib.tests.test_bad_files',
-        'bzrlib.tests.test_bencode',
-        'bzrlib.tests.test_bisect_multi',
-        'bzrlib.tests.test_branch',
-        'bzrlib.tests.test_branchbuilder',
-        'bzrlib.tests.test_btree_index',
-        'bzrlib.tests.test_bugtracker',
-        'bzrlib.tests.test_bundle',
-        'bzrlib.tests.test_bzrdir',
-        'bzrlib.tests.test__chunks_to_lines',
-        'bzrlib.tests.test_cache_utf8',
-        'bzrlib.tests.test_chk_map',
-        'bzrlib.tests.test_chk_serializer',
-        'bzrlib.tests.test_chunk_writer',
-        'bzrlib.tests.test_clean_tree',
-        'bzrlib.tests.test_commands',
-        'bzrlib.tests.test_commit',
-        'bzrlib.tests.test_commit_merge',
-        'bzrlib.tests.test_config',
-        'bzrlib.tests.test_conflicts',
-        'bzrlib.tests.test_counted_lock',
-        'bzrlib.tests.test_crash',
-        'bzrlib.tests.test_decorators',
-        'bzrlib.tests.test_delta',
-        'bzrlib.tests.test_debug',
-        'bzrlib.tests.test_deprecated_graph',
-        'bzrlib.tests.test_diff',
-        'bzrlib.tests.test_directory_service',
-        'bzrlib.tests.test_dirstate',
-        'bzrlib.tests.test_email_message',
-        'bzrlib.tests.test_eol_filters',
-        'bzrlib.tests.test_errors',
-        'bzrlib.tests.test_export',
-        'bzrlib.tests.test_extract',
-        'bzrlib.tests.test_fetch',
-        'bzrlib.tests.test_fifo_cache',
-        'bzrlib.tests.test_filters',
-        'bzrlib.tests.test_ftp_transport',
-        'bzrlib.tests.test_foreign',
-        'bzrlib.tests.test_generate_docs',
-        'bzrlib.tests.test_generate_ids',
-        'bzrlib.tests.test_globbing',
-        'bzrlib.tests.test_gpg',
-        'bzrlib.tests.test_graph',
-        'bzrlib.tests.test_groupcompress',
-        'bzrlib.tests.test_hashcache',
-        'bzrlib.tests.test_help',
-        'bzrlib.tests.test_hooks',
-        'bzrlib.tests.test_http',
-        'bzrlib.tests.test_http_response',
-        'bzrlib.tests.test_https_ca_bundle',
-        'bzrlib.tests.test_identitymap',
-        'bzrlib.tests.test_ignores',
-        'bzrlib.tests.test_index',
-        'bzrlib.tests.test_info',
-        'bzrlib.tests.test_inv',
-        'bzrlib.tests.test_inventory_delta',
-        'bzrlib.tests.test_knit',
-        'bzrlib.tests.test_lazy_import',
-        'bzrlib.tests.test_lazy_regex',
-        'bzrlib.tests.test_lock',
-        'bzrlib.tests.test_lockable_files',
-        'bzrlib.tests.test_lockdir',
-        'bzrlib.tests.test_log',
-        'bzrlib.tests.test_lru_cache',
-        'bzrlib.tests.test_lsprof',
-        'bzrlib.tests.test_mail_client',
-        'bzrlib.tests.test_memorytree',
-        'bzrlib.tests.test_merge',
-        'bzrlib.tests.test_merge3',
-        'bzrlib.tests.test_merge_core',
-        'bzrlib.tests.test_merge_directive',
-        'bzrlib.tests.test_missing',
-        'bzrlib.tests.test_msgeditor',
-        'bzrlib.tests.test_multiparent',
-        'bzrlib.tests.test_mutabletree',
-        'bzrlib.tests.test_nonascii',
-        'bzrlib.tests.test_options',
-        'bzrlib.tests.test_osutils',
-        'bzrlib.tests.test_osutils_encodings',
-        'bzrlib.tests.test_pack',
-        'bzrlib.tests.test_patch',
-        'bzrlib.tests.test_patches',
-        'bzrlib.tests.test_permissions',
-        'bzrlib.tests.test_plugins',
-        'bzrlib.tests.test_progress',
-        'bzrlib.tests.test_read_bundle',
-        'bzrlib.tests.test_reconcile',
-        'bzrlib.tests.test_reconfigure',
-        'bzrlib.tests.test_registry',
-        'bzrlib.tests.test_remote',
-        'bzrlib.tests.test_rename_map',
-        'bzrlib.tests.test_repository',
-        'bzrlib.tests.test_revert',
-        'bzrlib.tests.test_revision',
-        'bzrlib.tests.test_revisionspec',
-        'bzrlib.tests.test_revisiontree',
-        'bzrlib.tests.test_rio',
-        'bzrlib.tests.test_rules',
-        'bzrlib.tests.test_sampler',
-        'bzrlib.tests.test_selftest',
-        'bzrlib.tests.test_serializer',
-        'bzrlib.tests.test_setup',
-        'bzrlib.tests.test_sftp_transport',
-        'bzrlib.tests.test_shelf',
-        'bzrlib.tests.test_shelf_ui',
-        'bzrlib.tests.test_smart',
-        'bzrlib.tests.test_smart_add',
-        'bzrlib.tests.test_smart_request',
-        'bzrlib.tests.test_smart_transport',
-        'bzrlib.tests.test_smtp_connection',
-        'bzrlib.tests.test_source',
-        'bzrlib.tests.test_ssh_transport',
-        'bzrlib.tests.test_status',
-        'bzrlib.tests.test_store',
-        'bzrlib.tests.test_strace',
-        'bzrlib.tests.test_subsume',
-        'bzrlib.tests.test_switch',
-        'bzrlib.tests.test_symbol_versioning',
-        'bzrlib.tests.test_tag',
-        'bzrlib.tests.test_testament',
-        'bzrlib.tests.test_textfile',
-        'bzrlib.tests.test_textmerge',
-        'bzrlib.tests.test_timestamp',
-        'bzrlib.tests.test_trace',
-        'bzrlib.tests.test_transactions',
-        'bzrlib.tests.test_transform',
-        'bzrlib.tests.test_transport',
-        'bzrlib.tests.test_transport_log',
-        'bzrlib.tests.test_tree',
-        'bzrlib.tests.test_treebuilder',
-        'bzrlib.tests.test_tsort',
-        'bzrlib.tests.test_tuned_gzip',
-        'bzrlib.tests.test_ui',
-        'bzrlib.tests.test_uncommit',
-        'bzrlib.tests.test_upgrade',
-        'bzrlib.tests.test_upgrade_stacked',
-        'bzrlib.tests.test_urlutils',
-        'bzrlib.tests.test_version',
-        'bzrlib.tests.test_version_info',
-        'bzrlib.tests.test_weave',
-        'bzrlib.tests.test_whitebox',
-        'bzrlib.tests.test_win32utils',
-        'bzrlib.tests.test_workingtree',
-        'bzrlib.tests.test_workingtree_4',
-        'bzrlib.tests.test_wsgi',
-        'bzrlib.tests.test_xml',
-        ]

     loader = TestUtil.TestLoader()

@@ -3639,24 +3731,9 @@
     suite = loader.suiteClass()

     # modules building their suite with loadTestsFromModuleNames
-    suite.addTest(loader.loadTestsFromModuleNames(testmod_names))
+    suite.addTest(loader.loadTestsFromModuleNames(_test_suite_testmod_names()))

-    modules_to_doctest = [
-        'bzrlib',
-        'bzrlib.branchbuilder',
-        'bzrlib.export',
-        'bzrlib.inventory',
-        'bzrlib.iterablefile',
-        'bzrlib.lockdir',
-        'bzrlib.merge3',
-        'bzrlib.option',
-        'bzrlib.symbol_versioning',
-        'bzrlib.tests',
-        'bzrlib.timestamp',
-        'bzrlib.version_info_formats.format_custom',
-        ]
-
-    for mod in modules_to_doctest:
+    for mod in _test_suite_modules_to_doctest():
         if not interesting_module(mod):
             # No tests to keep here, move along
             continue
@@ -3803,8 +3880,7 @@
     :param new_id: The id to assign to it.
     :return: The new test.
     """
-    from copy import deepcopy
-    new_test = deepcopy(test)
+    new_test = copy(test)
     new_test.id = lambda: new_id
     return new_test


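The hunk above swaps `deepcopy` for a shallow `copy` when cloning a test to give it a new id. A minimal standalone sketch of why a shallow copy is enough here (the `SampleTest` name and module-level `clone_test` are illustrative, not bzrlib's actual code):

```python
import copy
import unittest


class SampleTest(unittest.TestCase):
    def runTest(self):
        pass


def clone_test(test, new_id):
    # A shallow copy gives a fresh instance whose id() we can override
    # without mutating the original; attribute values are shared, which
    # is fine because only the id is changed.
    new_test = copy.copy(test)
    new_test.id = lambda: new_id
    return new_test


original = SampleTest()
clone = clone_test(original, 'bzrlib.tests.sample.cloned')
print(clone.id())  # the clone reports the new id; the original is untouched
```

A deep copy would also work, but it is much slower and can choke on attributes (open files, locks) that a test may carry, which is presumably why the cheaper shallow copy is used.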
=== modified file 'bzrlib/tests/blackbox/test_filesystem_cicp.py'
--- bzrlib/tests/blackbox/test_filesystem_cicp.py 2009-04-06 08:17:53 +0000
+++ bzrlib/tests/blackbox/test_filesystem_cicp.py 2009-08-26 09:06:02 +0000
@@ -216,12 +216,19 @@


 class TestMisc(TestCICPBase):
+
     def test_status(self):
         wt = self._make_mixed_case_tree()
         self.run_bzr('add')

-        self.check_output('added:\n  CamelCaseParent/CamelCase\n  lowercaseparent/lowercase\n',
-                          'status camelcaseparent/camelcase LOWERCASEPARENT/LOWERCASE')
+        self.check_output(
+            """added:
+  CamelCaseParent/
+  CamelCaseParent/CamelCase
+  lowercaseparent/
+  lowercaseparent/lowercase
+""",
+            'status camelcaseparent/camelcase LOWERCASEPARENT/LOWERCASE')

     def test_ci(self):
         wt = self._make_mixed_case_tree()

=== modified file 'bzrlib/tests/blackbox/test_info.py'
--- bzrlib/tests/blackbox/test_info.py 2009-08-17 03:47:03 +0000
+++ bzrlib/tests/blackbox/test_info.py 2009-08-25 23:38:10 +0000
@@ -1328,6 +1328,10 @@
     def test_info_locking_oslocks(self):
         if sys.platform == "win32":
             raise TestSkipped("don't use oslocks on win32 in unix manner")
+        # This test tests old (all-in-one, OS lock using) behaviour which
+        # simply cannot work on windows (and is indeed why we changed our
+        # design). As such, don't try to remove the thisFailsStrictLockCheck
+        # call here.
         self.thisFailsStrictLockCheck()

         tree = self.make_branch_and_tree('branch',

=== modified file 'bzrlib/tests/blackbox/test_push.py'
--- bzrlib/tests/blackbox/test_push.py 2009-08-20 04:09:58 +0000
+++ bzrlib/tests/blackbox/test_push.py 2009-08-27 22:17:35 +0000
@@ -576,9 +576,7 @@
     def setUp(self):
         tests.TestCaseWithTransport.setUp(self)
         self.memory_server = RedirectingMemoryServer()
-        self.memory_server.setUp()
-        self.addCleanup(self.memory_server.tearDown)
-
+        self.start_server(self.memory_server)
         # Make the branch and tree that we'll be pushing.
         t = self.make_branch_and_tree('tree')
         self.build_tree(['tree/file'])

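Many hunks in this diff collapse the recurring `server.setUp(); self.addCleanup(server.tearDown)` pair into a single `self.start_server(...)` call. A rough sketch of what such a helper could look like (an illustration under the assumption that test servers expose `setUp`/`tearDown`; this is not bzrlib's actual implementation):

```python
class FakeServer(object):
    """Stand-in for a transport test server with setUp/tearDown."""

    def __init__(self):
        self.running = False
        self.backing = None

    def setUp(self, backing_server=None):
        self.backing = backing_server
        self.running = True

    def tearDown(self):
        self.running = False


class TestCaseSketch(object):
    """Minimal cleanup-tracking test case, illustrative only."""

    def __init__(self):
        self._cleanups = []

    def addCleanup(self, callback):
        self._cleanups.append(callback)

    def start_server(self, server, backing_server=None):
        # Start the server and guarantee it is torn down at the end of
        # the test, replacing the repeated setUp/addCleanup boilerplate.
        if backing_server is None:
            server.setUp()
        else:
            server.setUp(backing_server)
        self.addCleanup(server.tearDown)

    def run_cleanups(self):
        while self._cleanups:
            self._cleanups.pop()()


case = TestCaseSketch()
server = FakeServer()
case.start_server(server)
print(server.running)   # True while the test body runs
case.run_cleanups()
print(server.running)   # False after cleanups have run
```

Centralising the pattern means a server can never be started without its teardown being registered, which is exactly the bug class the repeated two-line idiom invites.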
=== modified file 'bzrlib/tests/blackbox/test_selftest.py'
--- bzrlib/tests/blackbox/test_selftest.py 2009-08-24 05:23:11 +0000
+++ bzrlib/tests/blackbox/test_selftest.py 2009-08-24 22:32:53 +0000
@@ -172,3 +172,7 @@
             outputs_nothing(['selftest', '--list-only', '--exclude', 'selftest'])
         finally:
             tests.selftest = original_selftest
+
+    def test_lsprof_tests(self):
+        params = self.get_params_passed_to_core('selftest --lsprof-tests')
+        self.assertEqual(True, params[1]["lsprof_tests"])

=== modified file 'bzrlib/tests/blackbox/test_serve.py'
--- bzrlib/tests/blackbox/test_serve.py 2009-07-20 11:27:05 +0000
+++ bzrlib/tests/blackbox/test_serve.py 2009-08-27 22:17:35 +0000
@@ -209,8 +209,7 @@
         ssh_server = SFTPServer(StubSSHServer)
         # XXX: We *don't* want to override the default SSH vendor, so we set
         # _vendor to what _get_ssh_vendor returns.
-        ssh_server.setUp()
-        self.addCleanup(ssh_server.tearDown)
+        self.start_server(ssh_server)
         port = ssh_server._listener.port

         # Access the branch via a bzr+ssh URL. The BZR_REMOTE_PATH environment

=== modified file 'bzrlib/tests/blackbox/test_split.py'
--- bzrlib/tests/blackbox/test_split.py 2009-06-08 02:02:08 +0000
+++ bzrlib/tests/blackbox/test_split.py 2009-08-27 21:48:33 +0000
@@ -31,7 +31,7 @@
         wt.add(['b', 'b/c'])
         wt.commit('rev1')
         self.run_bzr('split a/b')
-        self.run_bzr_error(('.* is not versioned',), 'split q')
+        self.run_bzr_error(('.* is not versioned',), 'split q', working_dir='a')

     def test_split_repo_failure(self):
         repo = self.make_repository('branch', shared=True, format='knit')

=== modified file 'bzrlib/tests/http_utils.py'
--- bzrlib/tests/http_utils.py 2009-05-04 14:48:21 +0000
+++ bzrlib/tests/http_utils.py 2009-08-27 22:17:35 +0000
@@ -133,8 +133,7 @@
         """Get the server instance for the secondary transport."""
         if self.__secondary_server is None:
             self.__secondary_server = self.create_transport_secondary_server()
-            self.__secondary_server.setUp()
-            self.addCleanup(self.__secondary_server.tearDown)
+            self.start_server(self.__secondary_server)
         return self.__secondary_server


=== modified file 'bzrlib/tests/per_branch/test_push.py'
--- bzrlib/tests/per_branch/test_push.py 2009-08-14 00:55:42 +0000
+++ bzrlib/tests/per_branch/test_push.py 2009-08-27 22:17:35 +0000
@@ -394,8 +394,7 @@
         # Create a smart server that publishes whatever the backing VFS server
         # does.
         self.smart_server = server.SmartTCPServer_for_testing()
-        self.smart_server.setUp(self.get_server())
-        self.addCleanup(self.smart_server.tearDown)
+        self.start_server(self.smart_server, self.get_server())
         # Make two empty branches, 'empty' and 'target'.
         self.empty_branch = self.make_branch('empty')
         self.make_branch('target')

=== modified file 'bzrlib/tests/per_pack_repository.py'
--- bzrlib/tests/per_pack_repository.py 2009-08-14 00:55:42 +0000
+++ bzrlib/tests/per_pack_repository.py 2009-08-27 22:17:35 +0000
@@ -271,8 +271,7 @@
         # failing to delete obsolete packs is not fatal
         format = self.get_format()
         server = fakenfs.FakeNFSServer()
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         transport = get_transport(server.get_url())
         bzrdir = self.get_format().initialize_on_transport(transport)
         repo = bzrdir.create_repository()
@@ -1020,8 +1019,7 @@
         # Create a smart server that publishes whatever the backing VFS server
         # does.
         self.smart_server = server.SmartTCPServer_for_testing()
-        self.smart_server.setUp(self.get_server())
-        self.addCleanup(self.smart_server.tearDown)
+        self.start_server(self.smart_server, self.get_server())
         # Log all HPSS calls into self.hpss_calls.
         client._SmartClient.hooks.install_named_hook(
             'call', self.capture_hpss_call, None)

=== modified file 'bzrlib/tests/per_repository/test_repository.py'
--- bzrlib/tests/per_repository/test_repository.py 2009-08-18 22:03:18 +0000
+++ bzrlib/tests/per_repository/test_repository.py 2009-08-27 22:17:35 +0000
@@ -823,9 +823,8 @@
         be created at the given path."""
         repo = self.make_repository(path, shared=shared)
         smart_server = server.SmartTCPServer_for_testing()
-        smart_server.setUp(self.get_server())
+        self.start_server(smart_server, self.get_server())
         remote_transport = get_transport(smart_server.get_url()).clone(path)
-        self.addCleanup(smart_server.tearDown)
         remote_bzrdir = bzrdir.BzrDir.open_from_transport(remote_transport)
         remote_repo = remote_bzrdir.open_repository()
         return remote_repo
@@ -897,14 +896,6 @@
         local_repo = local_bzrdir.open_repository()
         self.assertEqual(remote_backing_repo._format, local_repo._format)

-    # XXX: this helper probably belongs on TestCaseWithTransport
-    def make_smart_server(self, path):
-        smart_server = server.SmartTCPServer_for_testing()
-        smart_server.setUp(self.get_server())
-        remote_transport = get_transport(smart_server.get_url()).clone(path)
-        self.addCleanup(smart_server.tearDown)
-        return remote_transport
-
     def test_clone_to_hpss(self):
         pre_metadir_formats = [RepositoryFormat5(), RepositoryFormat6()]
         if self.repository_format in pre_metadir_formats:

=== modified file 'bzrlib/tests/per_workingtree/test_flush.py'
--- bzrlib/tests/per_workingtree/test_flush.py 2009-07-31 17:42:29 +0000
+++ bzrlib/tests/per_workingtree/test_flush.py 2009-08-25 23:38:10 +0000
@@ -16,7 +16,9 @@

 """Tests for WorkingTree.flush."""

+import sys
 from bzrlib import errors, inventory
+from bzrlib.tests import TestSkipped
 from bzrlib.tests.per_workingtree import TestCaseWithWorkingTree


@@ -31,8 +33,14 @@
         tree.unlock()

     def test_flush_when_inventory_is_modified(self):
+        if sys.platform == "win32":
+            raise TestSkipped("don't use oslocks on win32 in unix manner")
         # This takes a write lock on the source tree, then opens a second copy
-        # and tries to grab a read lock, which is a bit bogus
+        # and tries to grab a read lock. This works on Unix and is a reasonable
+        # way to detect when the file is actually written to, but it won't work
+        # (as a test) on Windows. It might be nice to instead stub out the
+        # functions used to write and that way do both less work and also be
+        # able to execute on Windows.
         self.thisFailsStrictLockCheck()
         # when doing a flush the inventory should be written if needed.
         # we test that by changing the inventory (using

=== modified file 'bzrlib/tests/per_workingtree/test_locking.py'
--- bzrlib/tests/per_workingtree/test_locking.py 2009-07-31 17:42:29 +0000
+++ bzrlib/tests/per_workingtree/test_locking.py 2009-08-25 23:38:10 +0000
@@ -16,11 +16,14 @@

 """Tests for the (un)lock interfaces on all working tree implemenations."""

+import sys
+
 from bzrlib import (
     branch,
     errors,
     lockdir,
     )
+from bzrlib.tests import TestSkipped
 from bzrlib.tests.per_workingtree import TestCaseWithWorkingTree


@@ -105,8 +108,14 @@

         :param methodname: The lock method to use to establish locks.
         """
-        # This write locks the local tree, and then grabs a read lock on a
-        # copy, which is bogus and the test just needs to be rewritten.
+        if sys.platform == "win32":
+            raise TestSkipped("don't use oslocks on win32 in unix manner")
+        # This helper takes a write lock on the source tree, then opens a
+        # second copy and tries to grab a read lock. This works on Unix and is
+        # a reasonable way to detect when the file is actually written to, but
+        # it won't work (as a test) on Windows. It might be nice to instead
+        # stub out the functions used to write and that way do both less work
+        # and also be able to execute on Windows.
         self.thisFailsStrictLockCheck()
         # when unlocking the last lock count from tree_write_lock,
         # the tree should do a flush().

=== modified file 'bzrlib/tests/per_workingtree/test_set_root_id.py'
--- bzrlib/tests/per_workingtree/test_set_root_id.py 2009-08-21 01:48:13 +0000
+++ bzrlib/tests/per_workingtree/test_set_root_id.py 2009-08-28 05:00:33 +0000
@@ -16,13 +16,18 @@

 """Tests for WorkingTree.set_root_id"""

+import sys
+
 from bzrlib import inventory
+from bzrlib.tests import TestSkipped
 from bzrlib.tests.per_workingtree import TestCaseWithWorkingTree


 class TestSetRootId(TestCaseWithWorkingTree):

     def test_set_and_read_unicode(self):
+        if sys.platform == "win32":
+            raise TestSkipped("don't use oslocks on win32 in unix manner")
         # This test tests that setting the root doesn't flush, so it
         # deliberately tests concurrent access that isn't possible on windows.
         self.thisFailsStrictLockCheck()

2183 | === modified file 'bzrlib/tests/test__known_graph.py' | |||
2184 | --- bzrlib/tests/test__known_graph.py 2009-08-26 16:03:59 +0000 | |||
2185 | +++ bzrlib/tests/test__known_graph.py 2009-09-02 13:32:52 +0000 | |||
2186 | @@ -768,3 +768,70 @@ | |||
2187 | 768 | }, | 768 | }, |
2188 | 769 | 'E', | 769 | 'E', |
2189 | 770 | []) | 770 | []) |
2190 | 771 | |||
2191 | 772 | |||
2192 | 773 | class TestKnownGraphStableReverseTopoSort(TestCaseWithKnownGraph): | ||
2193 | 774 | """Test the sort order returned by gc_sort.""" | ||
2194 | 775 | |||
2195 | 776 | def assertSorted(self, expected, parent_map): | ||
2196 | 777 | graph = self.make_known_graph(parent_map) | ||
+        value = graph.gc_sort()
+        if expected != value:
+            self.assertEqualDiff(pprint.pformat(expected),
+                                 pprint.pformat(value))
+
+    def test_empty(self):
+        self.assertSorted([], {})
+
+    def test_single(self):
+        self.assertSorted(['a'], {'a':()})
+        self.assertSorted([('a',)], {('a',):()})
+        self.assertSorted([('F', 'a')], {('F', 'a'):()})
+
+    def test_linear(self):
+        self.assertSorted(['c', 'b', 'a'], {'a':(), 'b':('a',), 'c':('b',)})
+        self.assertSorted([('c',), ('b',), ('a',)],
+                          {('a',):(), ('b',): (('a',),), ('c',): (('b',),)})
+        self.assertSorted([('F', 'c'), ('F', 'b'), ('F', 'a')],
+                          {('F', 'a'):(), ('F', 'b'): (('F', 'a'),),
+                           ('F', 'c'): (('F', 'b'),)})
+
+    def test_mixed_ancestries(self):
+        # Each prefix should be sorted separately
+        self.assertSorted([('F', 'c'), ('F', 'b'), ('F', 'a'),
+                           ('G', 'c'), ('G', 'b'), ('G', 'a'),
+                           ('Q', 'c'), ('Q', 'b'), ('Q', 'a'),
+                          ],
+                          {('F', 'a'):(), ('F', 'b'): (('F', 'a'),),
+                           ('F', 'c'): (('F', 'b'),),
+                           ('G', 'a'):(), ('G', 'b'): (('G', 'a'),),
+                           ('G', 'c'): (('G', 'b'),),
+                           ('Q', 'a'):(), ('Q', 'b'): (('Q', 'a'),),
+                           ('Q', 'c'): (('Q', 'b'),),
+                          })
+
+    def test_stable_sorting(self):
+        # the sort order should be stable even when extra nodes are added
+        self.assertSorted(['b', 'c', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',)})
+        self.assertSorted(['b', 'c', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',)})
+        self.assertSorted(['b', 'c', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',)})
+        self.assertSorted(['Z', 'b', 'c', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',),
+                           'Z':('a',)})
+        self.assertSorted(['e', 'b', 'c', 'f', 'Z', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',),
+                           'Z':('a',),
+                           'e':('b', 'c', 'd'),
+                           'f':('d', 'Z'),
+                          })
+
+    def test_skip_ghost(self):
+        self.assertSorted(['b', 'c', 'a'],
+                          {'a':(), 'b':('a', 'ghost'), 'c':('a',)})
+
+    def test_skip_mainline_ghost(self):
+        self.assertSorted(['b', 'c', 'a'],
+                          {'a':(), 'b':('ghost', 'a'), 'c':('a',)})
 
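The tests above pin down the behaviour of `gc_sort`: a reverse-topological, tip-first ordering that keeps each prefix's ancestry together, stays stable when unrelated keys are added, and skips ghost parents. As a rough illustration of that contract (a pure-Python sketch, not bzrlib's actual `KnownGraph.gc_sort`, whose tie-breaking for the stability cases is more careful):

```python
def gc_sort(parent_map):
    """Return keys newest-to-oldest: every key before any of its parents.

    Parents not themselves present in parent_map are ghosts and are
    skipped.  Illustrative sketch only.
    """
    present = set(parent_map)
    # Count, for each key, how many present keys name it as a parent.
    child_count = dict.fromkeys(parent_map, 0)
    for parents in parent_map.values():
        for parent in parents:
            if parent in present:
                child_count[parent] += 1
    # Tips (keys no present key names as a parent) come first, sorted.
    pending = sorted(k for k, n in child_count.items() if n == 0)
    result = []
    while pending:
        key = pending.pop(0)
        result.append(key)
        for parent in parent_map[key]:
            if parent not in present:
                continue  # ghost parent: skip it
            child_count[parent] -= 1
            if child_count[parent] == 0:
                # Walk a tip's own ancestry before moving to the next
                # tip, so each prefix's keys stay grouped together.
                pending.insert(0, parent)
    return result
```

For example, `gc_sort({'a': (), 'b': ('a',), 'c': ('b',)})` gives `['c', 'b', 'a']`, mirroring `test_linear` above.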
=== modified file 'bzrlib/tests/test_bundle.py'
--- bzrlib/tests/test_bundle.py	2009-08-04 14:10:09 +0000
+++ bzrlib/tests/test_bundle.py	2009-08-27 22:17:35 +0000
@@ -1830,9 +1830,8 @@
         """
         from bzrlib.tests.blackbox.test_push import RedirectingMemoryServer
         server = RedirectingMemoryServer()
-        server.setUp()
+        self.start_server(server)
         url = server.get_url() + 'infinite-loop'
-        self.addCleanup(server.tearDown)
         self.assertRaises(errors.NotABundle, read_mergeable_from_url, url)
 
     def test_smart_server_connection_reset(self):
@@ -1841,8 +1840,7 @@
         """
         # Instantiate a server that will provoke a ConnectionReset
         sock_server = _DisconnectingTCPServer()
-        sock_server.setUp()
-        self.addCleanup(sock_server.tearDown)
+        self.start_server(sock_server)
         # We don't really care what the url is since the server will close the
         # connection without interpreting it
         url = sock_server.get_url()
 
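Throughout this diff the `server.setUp()` / `self.addCleanup(server.tearDown)` pair is collapsed into a single `self.start_server(server)` call. A minimal sketch of what such a helper presumably does, inferred from the call sites here rather than from the actual bzrlib `TestCase` code (`CleanupMixin` and `FakeServer` are hypothetical names):

```python
class CleanupMixin(object):
    """Hypothetical stand-in for the relevant bits of a TestCase."""

    def __init__(self):
        self._cleanups = []

    def addCleanup(self, callable_, *args):
        self._cleanups.append((callable_, args))

    def start_server(self, server):
        # Start the server and guarantee teardown, in one step.
        server.setUp()
        self.addCleanup(server.tearDown)

    def run_cleanups(self):
        # Run registered cleanups in reverse order, as unittest does.
        while self._cleanups:
            fn, args = self._cleanups.pop()
            fn(*args)


class FakeServer(object):
    """Toy server with the setUp/tearDown protocol the tests rely on."""

    def __init__(self):
        self.running = False

    def setUp(self):
        self.running = True

    def tearDown(self):
        self.running = False
```

The point of the change is that teardown can no longer be forgotten or mis-ordered: registering the cleanup is bundled with starting the server.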
=== modified file 'bzrlib/tests/test_crash.py'
--- bzrlib/tests/test_crash.py	2009-08-20 04:45:48 +0000
+++ bzrlib/tests/test_crash.py	2009-08-28 12:38:01 +0000
@@ -18,20 +18,17 @@
 from StringIO import StringIO
 import sys
 
-
-from bzrlib.crash import (
-    report_bug,
-    _write_apport_report_to_file,
+from bzrlib import (
+    crash,
+    tests,
     )
-from bzrlib.tests import TestCase
-from bzrlib.tests.features import ApportFeature
+
+from bzrlib.tests import features
 
 
-class TestApportReporting(TestCase):
+class TestApportReporting(tests.TestCase):
 
-    def setUp(self):
-        TestCase.setUp(self)
-        self.requireFeature(ApportFeature)
+    _test_needs_features = [features.ApportFeature]
 
     def test_apport_report_contents(self):
         try:
@@ -39,19 +36,13 @@
         except AssertionError, e:
             pass
         outf = StringIO()
-        _write_apport_report_to_file(sys.exc_info(),
-            outf)
+        crash._write_apport_report_to_file(sys.exc_info(), outf)
         report = outf.getvalue()
 
-        self.assertContainsRe(report,
-            '(?m)^BzrVersion:')
+        self.assertContainsRe(report, '(?m)^BzrVersion:')
         # should be in the traceback
-        self.assertContainsRe(report,
-            'my error')
-        self.assertContainsRe(report,
-            'AssertionError')
-        self.assertContainsRe(report,
-            'test_apport_report_contents')
+        self.assertContainsRe(report, 'my error')
+        self.assertContainsRe(report, 'AssertionError')
+        self.assertContainsRe(report, 'test_apport_report_contents')
         # should also be in there
-        self.assertContainsRe(report,
-            '(?m)^CommandLine:.*selftest')
+        self.assertContainsRe(report, '(?m)^CommandLine:')
 
=== modified file 'bzrlib/tests/test_groupcompress.py'
--- bzrlib/tests/test_groupcompress.py	2009-06-29 14:51:13 +0000
+++ bzrlib/tests/test_groupcompress.py	2009-09-03 15:26:27 +0000
@@ -538,7 +538,7 @@
                                       'as-requested', False)]
         self.assertEqual([('b',), ('a',), ('d',), ('c',)], keys)
 
-    def test_insert_record_stream_re_uses_blocks(self):
+    def test_insert_record_stream_reuses_blocks(self):
         vf = self.make_test_vf(True, dir='source')
         def grouped_stream(revision_ids, first_parents=()):
             parents = first_parents
@@ -582,8 +582,14 @@
         vf2 = self.make_test_vf(True, dir='target')
         # ordering in 'groupcompress' order, should actually swap the groups in
         # the target vf, but the groups themselves should not be disturbed.
-        vf2.insert_record_stream(vf.get_record_stream(
-            [(r,) for r in 'abcdefgh'], 'groupcompress', False))
+        def small_size_stream():
+            for record in vf.get_record_stream([(r,) for r in 'abcdefgh'],
+                                               'groupcompress', False):
+                record._manager._full_enough_block_size = \
+                    record._manager._block._content_length
+                yield record
+
+        vf2.insert_record_stream(small_size_stream())
         stream = vf2.get_record_stream([(r,) for r in 'abcdefgh'],
                                        'groupcompress', False)
         vf2.writer.end()
@@ -594,6 +600,44 @@
                              record._manager._block._z_content)
         self.assertEqual(8, num_records)
 
+    def test_insert_record_stream_packs_on_the_fly(self):
+        vf = self.make_test_vf(True, dir='source')
+        def grouped_stream(revision_ids, first_parents=()):
+            parents = first_parents
+            for revision_id in revision_ids:
+                key = (revision_id,)
+                record = versionedfile.FulltextContentFactory(
+                    key, parents, None,
+                    'some content that is\n'
+                    'identical except for\n'
+                    'revision_id:%s\n' % (revision_id,))
+                yield record
+                parents = (key,)
+        # One group, a-d
+        vf.insert_record_stream(grouped_stream(['a', 'b', 'c', 'd']))
+        # Second group, e-h
+        vf.insert_record_stream(grouped_stream(['e', 'f', 'g', 'h'],
+                                               first_parents=(('d',),)))
+        # Now copy the blocks into another vf, and see that the
+        # insert_record_stream rebuilt a new block on-the-fly because of
+        # under-utilization
+        vf2 = self.make_test_vf(True, dir='target')
+        vf2.insert_record_stream(vf.get_record_stream(
+            [(r,) for r in 'abcdefgh'], 'groupcompress', False))
+        stream = vf2.get_record_stream([(r,) for r in 'abcdefgh'],
+                                       'groupcompress', False)
+        vf2.writer.end()
+        num_records = 0
+        # All of the records should be recombined into a single block
+        block = None
+        for record in stream:
+            num_records += 1
+            if block is None:
+                block = record._manager._block
+            else:
+                self.assertIs(block, record._manager._block)
+        self.assertEqual(8, num_records)
+
     def test__insert_record_stream_no_reuse_block(self):
         vf = self.make_test_vf(True, dir='source')
         def grouped_stream(revision_ids, first_parents=()):
@@ -702,19 +746,128 @@
                               " 0 8', \(\(\('a',\),\),\)\)")
 
 
+class StubGCVF(object):
+    def __init__(self, canned_get_blocks=None):
+        self._group_cache = {}
+        self._canned_get_blocks = canned_get_blocks or []
+    def _get_blocks(self, read_memos):
+        return iter(self._canned_get_blocks)
+
+
+class Test_BatchingBlockFetcher(TestCaseWithGroupCompressVersionedFiles):
+    """Simple whitebox unit tests for _BatchingBlockFetcher."""
+
+    def test_add_key_new_read_memo(self):
+        """Adding a key with an uncached read_memo new to this batch adds that
+        read_memo to the list of memos to fetch.
+        """
+        # locations are: index_memo, ignored, parents, ignored
+        # where index_memo is: (idx, offset, len, factory_start, factory_end)
+        # and (idx, offset, size) is known as the 'read_memo', identifying the
+        # raw bytes needed.
+        read_memo = ('fake index', 100, 50)
+        locations = {
+            ('key',): (read_memo + (None, None), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(StubGCVF(), locations)
+        total_size = batcher.add_key(('key',))
+        self.assertEqual(50, total_size)
+        self.assertEqual([('key',)], batcher.keys)
+        self.assertEqual([read_memo], batcher.memos_to_get)
+
+    def test_add_key_duplicate_read_memo(self):
+        """read_memos that occur multiple times in a batch will only be fetched
+        once.
+        """
+        read_memo = ('fake index', 100, 50)
+        # Two keys, both sharing the same read memo (but different overall
+        # index_memos).
+        locations = {
+            ('key1',): (read_memo + (0, 1), None, None, None),
+            ('key2',): (read_memo + (1, 2), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(StubGCVF(), locations)
+        total_size = batcher.add_key(('key1',))
+        total_size = batcher.add_key(('key2',))
+        self.assertEqual(50, total_size)
+        self.assertEqual([('key1',), ('key2',)], batcher.keys)
+        self.assertEqual([read_memo], batcher.memos_to_get)
+
+    def test_add_key_cached_read_memo(self):
+        """Adding a key with a cached read_memo will not cause that read_memo
+        to be added to the list to fetch.
+        """
+        read_memo = ('fake index', 100, 50)
+        gcvf = StubGCVF()
+        gcvf._group_cache[read_memo] = 'fake block'
+        locations = {
+            ('key',): (read_memo + (None, None), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(gcvf, locations)
+        total_size = batcher.add_key(('key',))
+        self.assertEqual(0, total_size)
+        self.assertEqual([('key',)], batcher.keys)
+        self.assertEqual([], batcher.memos_to_get)
+
+    def test_yield_factories_empty(self):
+        """An empty batch yields no factories."""
+        batcher = groupcompress._BatchingBlockFetcher(StubGCVF(), {})
+        self.assertEqual([], list(batcher.yield_factories()))
+
+    def test_yield_factories_calls_get_blocks(self):
+        """Uncached memos are retrieved via get_blocks."""
+        read_memo1 = ('fake index', 100, 50)
+        read_memo2 = ('fake index', 150, 40)
+        gcvf = StubGCVF(
+            canned_get_blocks=[
+                (read_memo1, groupcompress.GroupCompressBlock()),
+                (read_memo2, groupcompress.GroupCompressBlock())])
+        locations = {
+            ('key1',): (read_memo1 + (None, None), None, None, None),
+            ('key2',): (read_memo2 + (None, None), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(gcvf, locations)
+        batcher.add_key(('key1',))
+        batcher.add_key(('key2',))
+        factories = list(batcher.yield_factories(full_flush=True))
+        self.assertLength(2, factories)
+        keys = [f.key for f in factories]
+        kinds = [f.storage_kind for f in factories]
+        self.assertEqual([('key1',), ('key2',)], keys)
+        self.assertEqual(['groupcompress-block', 'groupcompress-block'], kinds)
+
+    def test_yield_factories_flushing(self):
+        """yield_factories holds back on yielding results from the final block
+        unless passed full_flush=True.
+        """
+        fake_block = groupcompress.GroupCompressBlock()
+        read_memo = ('fake index', 100, 50)
+        gcvf = StubGCVF()
+        gcvf._group_cache[read_memo] = fake_block
+        locations = {
+            ('key',): (read_memo + (None, None), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(gcvf, locations)
+        batcher.add_key(('key',))
+        self.assertEqual([], list(batcher.yield_factories()))
+        factories = list(batcher.yield_factories(full_flush=True))
+        self.assertLength(1, factories)
+        self.assertEqual(('key',), factories[0].key)
+        self.assertEqual('groupcompress-block', factories[0].storage_kind)
+
+
 class TestLazyGroupCompress(tests.TestCaseWithTransport):
 
     _texts = {
         ('key1',): "this is a text\n"
-                   "with a reasonable amount of compressible bytes\n",
+                   "with a reasonable amount of compressible bytes\n"
+                   "which can be shared between various other texts\n",
         ('key2',): "another text\n"
-                   "with a reasonable amount of compressible bytes\n",
+                   "with a reasonable amount of compressible bytes\n"
+                   "which can be shared between various other texts\n",
         ('key3',): "yet another text which won't be extracted\n"
-                   "with a reasonable amount of compressible bytes\n",
+                   "with a reasonable amount of compressible bytes\n"
+                   "which can be shared between various other texts\n",
         ('key4',): "this will be extracted\n"
                    "but references most of its bytes from\n"
                    "yet another text which won't be extracted\n"
-                   "with a reasonable amount of compressible bytes\n",
+                   "with a reasonable amount of compressible bytes\n"
+                   "which can be shared between various other texts\n",
     }
     def make_block(self, key_to_text):
         """Create a GroupCompressBlock, filling it with the given texts."""
@@ -732,6 +885,13 @@
         start, end = locations[key]
         manager.add_factory(key, (), start, end)
 
+    def make_block_and_full_manager(self, texts):
+        locations, block = self.make_block(texts)
+        manager = groupcompress._LazyGroupContentManager(block)
+        for key in sorted(texts):
+            self.add_key_to_manager(key, locations, block, manager)
+        return block, manager
+
     def test_get_fulltexts(self):
         locations, block = self.make_block(self._texts)
         manager = groupcompress._LazyGroupContentManager(block)
@@ -788,8 +948,8 @@
         header_len = int(header_len)
         block_len = int(block_len)
         self.assertEqual('groupcompress-block', storage_kind)
-        self.assertEqual(33, z_header_len)
-        self.assertEqual(25, header_len)
+        self.assertEqual(34, z_header_len)
+        self.assertEqual(26, header_len)
         self.assertEqual(len(block_bytes), block_len)
         z_header = rest[:z_header_len]
         header = zlib.decompress(z_header)
@@ -829,13 +989,7 @@
         self.assertEqual([('key1',), ('key4',)], result_order)
 
     def test__check_rebuild_no_changes(self):
-        locations, block = self.make_block(self._texts)
-        manager = groupcompress._LazyGroupContentManager(block)
-        # Request all the keys, which ensures that we won't rebuild
-        self.add_key_to_manager(('key1',), locations, block, manager)
-        self.add_key_to_manager(('key2',), locations, block, manager)
-        self.add_key_to_manager(('key3',), locations, block, manager)
-        self.add_key_to_manager(('key4',), locations, block, manager)
+        block, manager = self.make_block_and_full_manager(self._texts)
         manager._check_rebuild_block()
         self.assertIs(block, manager._block)
 
@@ -866,3 +1020,50 @@
         self.assertEqual(('key4',), record.key)
         self.assertEqual(self._texts[record.key],
                          record.get_bytes_as('fulltext'))
+
+    def test_check_is_well_utilized_all_keys(self):
+        block, manager = self.make_block_and_full_manager(self._texts)
+        self.assertFalse(manager.check_is_well_utilized())
+        # Though we can fake it by changing the recommended minimum size
+        manager._full_enough_block_size = block._content_length
+        self.assertTrue(manager.check_is_well_utilized())
+        # Setting it just above causes it to fail
+        manager._full_enough_block_size = block._content_length + 1
+        self.assertFalse(manager.check_is_well_utilized())
+        # Setting the mixed-block size doesn't do anything, because the content
+        # is considered to not be 'mixed'
+        manager._full_enough_mixed_block_size = block._content_length
+        self.assertFalse(manager.check_is_well_utilized())
+
+    def test_check_is_well_utilized_mixed_keys(self):
+        texts = {}
+        f1k1 = ('f1', 'k1')
+        f1k2 = ('f1', 'k2')
+        f2k1 = ('f2', 'k1')
+        f2k2 = ('f2', 'k2')
+        texts[f1k1] = self._texts[('key1',)]
+        texts[f1k2] = self._texts[('key2',)]
+        texts[f2k1] = self._texts[('key3',)]
+        texts[f2k2] = self._texts[('key4',)]
+        block, manager = self.make_block_and_full_manager(texts)
+        self.assertFalse(manager.check_is_well_utilized())
+        manager._full_enough_block_size = block._content_length
+        self.assertTrue(manager.check_is_well_utilized())
+        manager._full_enough_block_size = block._content_length + 1
+        self.assertFalse(manager.check_is_well_utilized())
+        manager._full_enough_mixed_block_size = block._content_length
+        self.assertTrue(manager.check_is_well_utilized())
+
+    def test_check_is_well_utilized_partial_use(self):
+        locations, block = self.make_block(self._texts)
+        manager = groupcompress._LazyGroupContentManager(block)
+        manager._full_enough_block_size = block._content_length
+        self.add_key_to_manager(('key1',), locations, block, manager)
+        self.add_key_to_manager(('key2',), locations, block, manager)
+        # Just using the content from key1 and 2 is not enough to be considered
+        # 'complete'
+        self.assertFalse(manager.check_is_well_utilized())
+        # However if we add key3, then we have enough, as we only require 75%
+        # consumption
+        self.add_key_to_manager(('key4',), locations, block, manager)
+        self.assertTrue(manager.check_is_well_utilized())
 
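The `check_is_well_utilized` tests above pin down the pack-on-the-fly heuristic: a block is only copied across unchanged if its content is large enough ("full enough"), and, when only some of its keys are requested, if at least 75% of the content is actually consumed (the figure comes from the test comment above). A simplified, standalone sketch of that decision — an illustration, not the actual `_LazyGroupContentManager` code:

```python
def is_well_utilized(content_length, used_bytes, full_enough_size):
    """Decide whether a block may be reused as-is (illustrative sketch).

    content_length: total decompressed size of the block.
    used_bytes: how much of that content the requested keys cover.
    full_enough_size: configured minimum size for wholesale reuse.
    """
    # A small block is never considered full enough to reuse as-is;
    # it should be recombined with its neighbours instead.
    if content_length < full_enough_size:
        return False
    # Otherwise require that at least 75% of the block's content is
    # actually wanted before copying it across unchanged.
    return used_bytes * 4 >= content_length * 3
```

Blocks that fail this check are rebuilt on the fly during `insert_record_stream`, which is what `test_insert_record_stream_packs_on_the_fly` above verifies end to end.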
=== modified file 'bzrlib/tests/test_http.py'
--- bzrlib/tests/test_http.py	2009-08-19 16:33:39 +0000
+++ bzrlib/tests/test_http.py	2009-08-27 22:17:35 +0000
@@ -304,7 +304,7 @@
 
         server = http_server.HttpServer(BogusRequestHandler)
         try:
-            self.assertRaises(httplib.UnknownProtocol,server.setUp)
+            self.assertRaises(httplib.UnknownProtocol, server.setUp)
         except:
             server.tearDown()
             self.fail('HTTP Server creation did not raise UnknownProtocol')
@@ -312,7 +312,7 @@
     def test_force_invalid_protocol(self):
         server = http_server.HttpServer(protocol_version='HTTP/0.1')
         try:
-            self.assertRaises(httplib.UnknownProtocol,server.setUp)
+            self.assertRaises(httplib.UnknownProtocol, server.setUp)
         except:
             server.tearDown()
             self.fail('HTTP Server creation did not raise UnknownProtocol')
@@ -320,8 +320,10 @@
     def test_server_start_and_stop(self):
         server = http_server.HttpServer()
         server.setUp()
-        self.assertTrue(server._http_running)
-        server.tearDown()
+        try:
+            self.assertTrue(server._http_running)
+        finally:
+            server.tearDown()
         self.assertFalse(server._http_running)
 
     def test_create_http_server_one_zero(self):
@@ -330,8 +332,7 @@
         protocol_version = 'HTTP/1.0'
 
         server = http_server.HttpServer(RequestHandlerOneZero)
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         self.assertIsInstance(server._httpd, http_server.TestingHTTPServer)
 
     def test_create_http_server_one_one(self):
@@ -340,8 +341,7 @@
         protocol_version = 'HTTP/1.1'
 
         server = http_server.HttpServer(RequestHandlerOneOne)
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         self.assertIsInstance(server._httpd,
                               http_server.TestingThreadingHTTPServer)
 
@@ -352,8 +352,7 @@
 
         server = http_server.HttpServer(RequestHandlerOneZero,
                                         protocol_version='HTTP/1.1')
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         self.assertIsInstance(server._httpd,
                               http_server.TestingThreadingHTTPServer)
 
@@ -364,8 +363,7 @@
 
         server = http_server.HttpServer(RequestHandlerOneOne,
                                         protocol_version='HTTP/1.0')
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         self.assertIsInstance(server._httpd,
                               http_server.TestingHTTPServer)
 
@@ -431,8 +429,8 @@
     def test_http_impl_urls(self):
         """There are servers which ask for particular clients to connect"""
         server = self._server()
+        server.setUp()
         try:
-            server.setUp()
             url = server.get_url()
             self.assertTrue(url.startswith('%s://' % self._qualified_prefix))
         finally:
@@ -544,8 +542,7 @@
 
     def test_post_body_is_received(self):
         server = RecordingServer(expect_body_tail='end-of-body')
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         scheme = self._qualified_prefix
         url = '%s://%s:%s/' % (scheme, server.host, server.port)
         http_transport = self._transport(url)
@@ -780,8 +777,7 @@
 
     def test_send_receive_bytes(self):
         server = RecordingServer(expect_body_tail='c')
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         sock.connect((server.host, server.port))
         sock.sendall('abc')
 
2747 | 788 | 784 | ||
2748 | === modified file 'bzrlib/tests/test_lsprof.py' | |||
2749 | --- bzrlib/tests/test_lsprof.py 2009-03-23 14:59:43 +0000 | |||
2750 | +++ bzrlib/tests/test_lsprof.py 2009-08-24 21:05:09 +0000 | |||
2751 | @@ -92,3 +92,22 @@ | |||
2752 | 92 | self.stats.save(f) | 92 | self.stats.save(f) |
2753 | 93 | data1 = cPickle.load(open(f)) | 93 | data1 = cPickle.load(open(f)) |
2754 | 94 | self.assertEqual(type(data1), bzrlib.lsprof.Stats) | 94 | self.assertEqual(type(data1), bzrlib.lsprof.Stats) |
2755 | 95 | |||
2756 | 96 | |||
2757 | 97 | class TestBzrProfiler(tests.TestCase): | ||
2758 | 98 | |||
2759 | 99 | _test_needs_features = [LSProfFeature] | ||
2760 | 100 | |||
2761 | 101 | def test_start_call_stuff_stop(self): | ||
2762 | 102 | profiler = bzrlib.lsprof.BzrProfiler() | ||
2763 | 103 | profiler.start() | ||
2764 | 104 | try: | ||
2765 | 105 | def a_function(): | ||
2766 | 106 | pass | ||
2767 | 107 | a_function() | ||
2768 | 108 | finally: | ||
2769 | 109 | stats = profiler.stop() | ||
2770 | 110 | stats.freeze() | ||
2771 | 111 | lines = [str(data) for data in stats.data] | ||
2772 | 112 | lines = [line for line in lines if 'a_function' in line] | ||
2773 | 113 | self.assertLength(1, lines) | ||
2774 | 95 | 114 | ||
2775 | === modified file 'bzrlib/tests/test_remote.py' | |||
2776 | --- bzrlib/tests/test_remote.py 2009-08-27 05:22:14 +0000 | |||
2777 | +++ bzrlib/tests/test_remote.py 2009-08-30 21:34:42 +0000 | |||
2778 | @@ -1945,8 +1945,7 @@ | |||
2779 | 1945 | def test_allows_new_revisions(self): | 1945 | def test_allows_new_revisions(self): |
2780 | 1946 | """get_parent_map's results can be updated by commit.""" | 1946 | """get_parent_map's results can be updated by commit.""" |
2781 | 1947 | smart_server = server.SmartTCPServer_for_testing() | 1947 | smart_server = server.SmartTCPServer_for_testing() |
2784 | 1948 | smart_server.setUp() | 1948 | self.start_server(smart_server) |
2783 | 1949 | self.addCleanup(smart_server.tearDown) | ||
2785 | 1950 | self.make_branch('branch') | 1949 | self.make_branch('branch') |
2786 | 1951 | branch = Branch.open(smart_server.get_url() + '/branch') | 1950 | branch = Branch.open(smart_server.get_url() + '/branch') |
2787 | 1952 | tree = branch.create_checkout('tree', lightweight=True) | 1951 | tree = branch.create_checkout('tree', lightweight=True) |
2788 | @@ -2781,8 +2780,7 @@ | |||
2789 | 2781 | stacked_branch.set_stacked_on_url('../base') | 2780 | stacked_branch.set_stacked_on_url('../base') |
2790 | 2782 | # start a server looking at this | 2781 | # start a server looking at this |
2791 | 2783 | smart_server = server.SmartTCPServer_for_testing() | 2782 | smart_server = server.SmartTCPServer_for_testing() |
2794 | 2784 | smart_server.setUp() | 2783 | self.start_server(smart_server) |
2793 | 2785 | self.addCleanup(smart_server.tearDown) | ||
2795 | 2786 | remote_bzrdir = BzrDir.open(smart_server.get_url() + '/stacked') | 2784 | remote_bzrdir = BzrDir.open(smart_server.get_url() + '/stacked') |
2796 | 2787 | # can get its branch and repository | 2785 | # can get its branch and repository |
2797 | 2788 | remote_branch = remote_bzrdir.open_branch() | 2786 | remote_branch = remote_bzrdir.open_branch() |
2798 | @@ -2943,8 +2941,7 @@ | |||
2799 | 2943 | # Create a smart server that publishes whatever the backing VFS server | 2941 | # Create a smart server that publishes whatever the backing VFS server |
2800 | 2944 | # does. | 2942 | # does. |
2801 | 2945 | self.smart_server = server.SmartTCPServer_for_testing() | 2943 | self.smart_server = server.SmartTCPServer_for_testing() |
2804 | 2946 | self.smart_server.setUp(self.get_server()) | 2944 | self.start_server(self.smart_server, self.get_server()) |
2803 | 2947 | self.addCleanup(self.smart_server.tearDown) | ||
2805 | 2948 | # Log all HPSS calls into self.hpss_calls. | 2945 | # Log all HPSS calls into self.hpss_calls. |
2806 | 2949 | _SmartClient.hooks.install_named_hook( | 2946 | _SmartClient.hooks.install_named_hook( |
2807 | 2950 | 'call', self.capture_hpss_call, None) | 2947 | 'call', self.capture_hpss_call, None) |
2808 | 2951 | 2948 | ||
2809 | === modified file 'bzrlib/tests/test_repository.py' | |||
2810 | --- bzrlib/tests/test_repository.py 2009-08-17 23:15:55 +0000 | |||
2811 | +++ bzrlib/tests/test_repository.py 2009-09-01 21:21:53 +0000 | |||
2812 | @@ -683,6 +683,28 @@ | |||
2813 | 683 | 683 | ||
2814 | 684 | class Test2a(TestCaseWithTransport): | 684 | class Test2a(TestCaseWithTransport): |
2815 | 685 | 685 | ||
2816 | 686 | def test_fetch_combines_groups(self): | ||
2817 | 687 | builder = self.make_branch_builder('source', format='2a') | ||
2818 | 688 | builder.start_series() | ||
2819 | 689 | builder.build_snapshot('1', None, [ | ||
2820 | 690 | ('add', ('', 'root-id', 'directory', '')), | ||
2821 | 691 | ('add', ('file', 'file-id', 'file', 'content\n'))]) | ||
2822 | 692 | builder.build_snapshot('2', ['1'], [ | ||
2823 | 693 | ('modify', ('file-id', 'content-2\n'))]) | ||
2824 | 694 | builder.finish_series() | ||
2825 | 695 | source = builder.get_branch() | ||
2826 | 696 | target = self.make_repository('target', format='2a') | ||
2827 | 697 | target.fetch(source.repository) | ||
2828 | 698 | target.lock_read() | ||
2829 | 699 | self.addCleanup(target.unlock) | ||
2830 | 700 | details = target.texts._index.get_build_details( | ||
2831 | 701 | [('file-id', '1',), ('file-id', '2',)]) | ||
2832 | 702 | file_1_details = details[('file-id', '1')] | ||
2833 | 703 | file_2_details = details[('file-id', '2')] | ||
2834 | 704 | # The index, and what to read off disk, should be the same for both | ||
2835 | 705 | # versions of the file. | ||
2836 | 706 | self.assertEqual(file_1_details[0][:3], file_2_details[0][:3]) | ||
2837 | 707 | |||
2838 | 686 | def test_format_pack_compresses_True(self): | 708 | def test_format_pack_compresses_True(self): |
2839 | 687 | repo = self.make_repository('repo', format='2a') | 709 | repo = self.make_repository('repo', format='2a') |
2840 | 688 | self.assertTrue(repo._format.pack_compresses) | 710 | self.assertTrue(repo._format.pack_compresses) |
2841 | 689 | 711 | ||
2842 | === modified file 'bzrlib/tests/test_selftest.py' | |||
2843 | --- bzrlib/tests/test_selftest.py 2009-08-24 05:35:28 +0000 | |||
2844 | +++ bzrlib/tests/test_selftest.py 2009-08-26 23:25:28 +0000 | |||
2845 | @@ -687,6 +687,26 @@ | |||
2846 | 687 | self.assertEqual(url, t.clone('..').base) | 687 | self.assertEqual(url, t.clone('..').base) |
2847 | 688 | 688 | ||
2848 | 689 | 689 | ||
2849 | 690 | class TestProfileResult(tests.TestCase): | ||
2850 | 691 | |||
2851 | 692 | def test_profiles_tests(self): | ||
2852 | 693 | self.requireFeature(test_lsprof.LSProfFeature) | ||
2853 | 694 | terminal = unittest.TestResult() | ||
2854 | 695 | result = tests.ProfileResult(terminal) | ||
2855 | 696 | class Sample(tests.TestCase): | ||
2856 | 697 | def a(self): | ||
2857 | 698 | self.sample_function() | ||
2858 | 699 | def sample_function(self): | ||
2859 | 700 | pass | ||
2860 | 701 | test = Sample("a") | ||
2861 | 702 | test.attrs_to_keep = test.attrs_to_keep + ('_benchcalls',) | ||
2862 | 703 | test.run(result) | ||
2863 | 704 | self.assertLength(1, test._benchcalls) | ||
2864 | 705 | # We must be able to unpack it as the test reporting code wants | ||
2865 | 706 | (_, _, _), stats = test._benchcalls[0] | ||
2866 | 707 | self.assertTrue(callable(stats.pprint)) | ||
2867 | 708 | |||
2868 | 709 | |||
2869 | 690 | class TestTestResult(tests.TestCase): | 710 | class TestTestResult(tests.TestCase): |
2870 | 691 | 711 | ||
2871 | 692 | def check_timing(self, test_case, expected_re): | 712 | def check_timing(self, test_case, expected_re): |
2872 | @@ -800,7 +820,7 @@ | |||
2873 | 800 | def test_known_failure(self): | 820 | def test_known_failure(self): |
2874 | 801 | """A KnownFailure being raised should trigger several result actions.""" | 821 | """A KnownFailure being raised should trigger several result actions.""" |
2875 | 802 | class InstrumentedTestResult(tests.ExtendedTestResult): | 822 | class InstrumentedTestResult(tests.ExtendedTestResult): |
2877 | 803 | def done(self): pass | 823 | def stopTestRun(self): pass |
2878 | 804 | def startTests(self): pass | 824 | def startTests(self): pass |
2879 | 805 | def report_test_start(self, test): pass | 825 | def report_test_start(self, test): pass |
2880 | 806 | def report_known_failure(self, test, err): | 826 | def report_known_failure(self, test, err): |
2881 | @@ -854,7 +874,7 @@ | |||
2882 | 854 | def test_add_not_supported(self): | 874 | def test_add_not_supported(self): |
2883 | 855 | """Test the behaviour of invoking addNotSupported.""" | 875 | """Test the behaviour of invoking addNotSupported.""" |
2884 | 856 | class InstrumentedTestResult(tests.ExtendedTestResult): | 876 | class InstrumentedTestResult(tests.ExtendedTestResult): |
2886 | 857 | def done(self): pass | 877 | def stopTestRun(self): pass |
2887 | 858 | def startTests(self): pass | 878 | def startTests(self): pass |
2888 | 859 | def report_test_start(self, test): pass | 879 | def report_test_start(self, test): pass |
2889 | 860 | def report_unsupported(self, test, feature): | 880 | def report_unsupported(self, test, feature): |
2890 | @@ -898,7 +918,7 @@ | |||
2891 | 898 | def test_unavailable_exception(self): | 918 | def test_unavailable_exception(self): |
2892 | 899 | """An UnavailableFeature being raised should invoke addNotSupported.""" | 919 | """An UnavailableFeature being raised should invoke addNotSupported.""" |
2893 | 900 | class InstrumentedTestResult(tests.ExtendedTestResult): | 920 | class InstrumentedTestResult(tests.ExtendedTestResult): |
2895 | 901 | def done(self): pass | 921 | def stopTestRun(self): pass |
2896 | 902 | def startTests(self): pass | 922 | def startTests(self): pass |
2897 | 903 | def report_test_start(self, test): pass | 923 | def report_test_start(self, test): pass |
2898 | 904 | def addNotSupported(self, test, feature): | 924 | def addNotSupported(self, test, feature): |
2899 | @@ -981,11 +1001,14 @@ | |||
2900 | 981 | because of our use of global state. | 1001 | because of our use of global state. |
2901 | 982 | """ | 1002 | """ |
2902 | 983 | old_root = tests.TestCaseInTempDir.TEST_ROOT | 1003 | old_root = tests.TestCaseInTempDir.TEST_ROOT |
2903 | 1004 | old_leak = tests.TestCase._first_thread_leaker_id | ||
2904 | 984 | try: | 1005 | try: |
2905 | 985 | tests.TestCaseInTempDir.TEST_ROOT = None | 1006 | tests.TestCaseInTempDir.TEST_ROOT = None |
2906 | 1007 | tests.TestCase._first_thread_leaker_id = None | ||
2907 | 986 | return testrunner.run(test) | 1008 | return testrunner.run(test) |
2908 | 987 | finally: | 1009 | finally: |
2909 | 988 | tests.TestCaseInTempDir.TEST_ROOT = old_root | 1010 | tests.TestCaseInTempDir.TEST_ROOT = old_root |
2910 | 1011 | tests.TestCase._first_thread_leaker_id = old_leak | ||
2911 | 989 | 1012 | ||
2912 | 990 | def test_known_failure_failed_run(self): | 1013 | def test_known_failure_failed_run(self): |
2913 | 991 | # run a test that generates a known failure which should be printed in | 1014 | # run a test that generates a known failure which should be printed in |
2914 | @@ -1031,6 +1054,20 @@ | |||
2915 | 1031 | '\n' | 1054 | '\n' |
2916 | 1032 | 'OK \\(known_failures=1\\)\n') | 1055 | 'OK \\(known_failures=1\\)\n') |
2917 | 1033 | 1056 | ||
2918 | 1057 | def test_result_decorator(self): | ||
2919 | 1058 | # decorate results | ||
2920 | 1059 | calls = [] | ||
2921 | 1060 | class LoggingDecorator(tests.ForwardingResult): | ||
2922 | 1061 | def startTest(self, test): | ||
2923 | 1062 | tests.ForwardingResult.startTest(self, test) | ||
2924 | 1063 | calls.append('start') | ||
2925 | 1064 | test = unittest.FunctionTestCase(lambda:None) | ||
2926 | 1065 | stream = StringIO() | ||
2927 | 1066 | runner = tests.TextTestRunner(stream=stream, | ||
2928 | 1067 | result_decorators=[LoggingDecorator]) | ||
2929 | 1068 | result = self.run_test_runner(runner, test) | ||
2930 | 1069 | self.assertLength(1, calls) | ||
2931 | 1070 | |||
2932 | 1034 | def test_skipped_test(self): | 1071 | def test_skipped_test(self): |
2933 | 1035 | # run a test that is skipped, and check the suite as a whole still | 1072 | # run a test that is skipped, and check the suite as a whole still |
2934 | 1036 | # succeeds. | 1073 | # succeeds. |
2935 | @@ -1103,10 +1140,6 @@ | |||
2936 | 1103 | self.assertContainsRe(out.getvalue(), | 1140 | self.assertContainsRe(out.getvalue(), |
2937 | 1104 | r'(?m)^ this test never runs') | 1141 | r'(?m)^ this test never runs') |
2938 | 1105 | 1142 | ||
2939 | 1106 | def test_not_applicable_demo(self): | ||
2940 | 1107 | # just so you can see it in the test output | ||
2941 | 1108 | raise tests.TestNotApplicable('this test is just a demonstation') | ||
2942 | 1109 | |||
2943 | 1110 | def test_unsupported_features_listed(self): | 1143 | def test_unsupported_features_listed(self): |
2944 | 1111 | """When unsupported features are encountered they are detailed.""" | 1144 | """When unsupported features are encountered they are detailed.""" |
2945 | 1112 | class Feature1(tests.Feature): | 1145 | class Feature1(tests.Feature): |
2946 | @@ -1261,6 +1294,34 @@ | |||
2947 | 1261 | self.assertContainsRe(log, 'this will be kept') | 1294 | self.assertContainsRe(log, 'this will be kept') |
2948 | 1262 | self.assertEqual(log, test._log_contents) | 1295 | self.assertEqual(log, test._log_contents) |
2949 | 1263 | 1296 | ||
2950 | 1297 | def test_startTestRun(self): | ||
2951 | 1298 | """run should call result.startTestRun()""" | ||
2952 | 1299 | calls = [] | ||
2953 | 1300 | class LoggingDecorator(tests.ForwardingResult): | ||
2954 | 1301 | def startTestRun(self): | ||
2955 | 1302 | tests.ForwardingResult.startTestRun(self) | ||
2956 | 1303 | calls.append('startTestRun') | ||
2957 | 1304 | test = unittest.FunctionTestCase(lambda:None) | ||
2958 | 1305 | stream = StringIO() | ||
2959 | 1306 | runner = tests.TextTestRunner(stream=stream, | ||
2960 | 1307 | result_decorators=[LoggingDecorator]) | ||
2961 | 1308 | result = self.run_test_runner(runner, test) | ||
2962 | 1309 | self.assertLength(1, calls) | ||
2963 | 1310 | |||
2964 | 1311 | def test_stopTestRun(self): | ||
2965 | 1312 | """run should call result.stopTestRun()""" | ||
2966 | 1313 | calls = [] | ||
2967 | 1314 | class LoggingDecorator(tests.ForwardingResult): | ||
2968 | 1315 | def stopTestRun(self): | ||
2969 | 1316 | tests.ForwardingResult.stopTestRun(self) | ||
2970 | 1317 | calls.append('stopTestRun') | ||
2971 | 1318 | test = unittest.FunctionTestCase(lambda:None) | ||
2972 | 1319 | stream = StringIO() | ||
2973 | 1320 | runner = tests.TextTestRunner(stream=stream, | ||
2974 | 1321 | result_decorators=[LoggingDecorator]) | ||
2975 | 1322 | result = self.run_test_runner(runner, test) | ||
2976 | 1323 | self.assertLength(1, calls) | ||
2977 | 1324 | |||
2978 | 1264 | 1325 | ||
2979 | 1265 | class SampleTestCase(tests.TestCase): | 1326 | class SampleTestCase(tests.TestCase): |
2980 | 1266 | 1327 | ||
2981 | @@ -1480,6 +1541,7 @@ | |||
2982 | 1480 | self.assertEqual((time.sleep, (0.003,), {}), self._benchcalls[1][0]) | 1541 | self.assertEqual((time.sleep, (0.003,), {}), self._benchcalls[1][0]) |
2983 | 1481 | self.assertIsInstance(self._benchcalls[0][1], bzrlib.lsprof.Stats) | 1542 | self.assertIsInstance(self._benchcalls[0][1], bzrlib.lsprof.Stats) |
2984 | 1482 | self.assertIsInstance(self._benchcalls[1][1], bzrlib.lsprof.Stats) | 1543 | self.assertIsInstance(self._benchcalls[1][1], bzrlib.lsprof.Stats) |
2985 | 1544 | del self._benchcalls[:] | ||
2986 | 1483 | 1545 | ||
2987 | 1484 | def test_knownFailure(self): | 1546 | def test_knownFailure(self): |
2988 | 1485 | """Self.knownFailure() should raise a KnownFailure exception.""" | 1547 | """Self.knownFailure() should raise a KnownFailure exception.""" |
2989 | @@ -1742,16 +1804,16 @@ | |||
2990 | 1742 | tree = self.make_branch_and_memory_tree('a') | 1804 | tree = self.make_branch_and_memory_tree('a') |
2991 | 1743 | self.assertIsInstance(tree, bzrlib.memorytree.MemoryTree) | 1805 | self.assertIsInstance(tree, bzrlib.memorytree.MemoryTree) |
2992 | 1744 | 1806 | ||
2999 | 1745 | 1807 | def test_make_tree_for_local_vfs_backed_transport(self): | |
3000 | 1746 | class TestSFTPMakeBranchAndTree(test_sftp_transport.TestCaseWithSFTPServer): | 1808 | # make_branch_and_tree has to use local branch and repositories |
3001 | 1747 | 1809 | # when the vfs transport and local disk are colocated, even if | |
3002 | 1748 | def test_make_tree_for_sftp_branch(self): | 1810 | # a different transport is in use for url generation. |
3003 | 1749 | """Transports backed by local directories create local trees.""" | 1811 | from bzrlib.transport.fakevfat import FakeVFATServer |
3004 | 1750 | # NB: This is arguably a bug in the definition of make_branch_and_tree. | 1812 | self.transport_server = FakeVFATServer |
3005 | 1813 | self.assertFalse(self.get_url('t1').startswith('file://')) | ||
3006 | 1751 | tree = self.make_branch_and_tree('t1') | 1814 | tree = self.make_branch_and_tree('t1') |
3007 | 1752 | base = tree.bzrdir.root_transport.base | 1815 | base = tree.bzrdir.root_transport.base |
3010 | 1753 | self.failIf(base.startswith('sftp'), | 1816 | self.assertStartsWith(base, 'file://') |
3009 | 1754 | 'base %r is on sftp but should be local' % base) | ||
3011 | 1755 | self.assertEquals(tree.bzrdir.root_transport, | 1817 | self.assertEquals(tree.bzrdir.root_transport, |
3012 | 1756 | tree.branch.bzrdir.root_transport) | 1818 | tree.branch.bzrdir.root_transport) |
3013 | 1757 | self.assertEquals(tree.bzrdir.root_transport, | 1819 | self.assertEquals(tree.bzrdir.root_transport, |
3014 | @@ -1817,6 +1879,20 @@ | |||
3015 | 1817 | self.assertNotContainsRe("Test.b", output.getvalue()) | 1879 | self.assertNotContainsRe("Test.b", output.getvalue()) |
3016 | 1818 | self.assertLength(2, output.readlines()) | 1880 | self.assertLength(2, output.readlines()) |
3017 | 1819 | 1881 | ||
3018 | 1882 | def test_lsprof_tests(self): | ||
3019 | 1883 | self.requireFeature(test_lsprof.LSProfFeature) | ||
3020 | 1884 | calls = [] | ||
3021 | 1885 | class Test(object): | ||
3022 | 1886 | def __call__(test, result): | ||
3023 | 1887 | test.run(result) | ||
3024 | 1888 | def run(test, result): | ||
3025 | 1889 | self.assertIsInstance(result, tests.ForwardingResult) | ||
3026 | 1890 | calls.append("called") | ||
3027 | 1891 | def countTestCases(self): | ||
3028 | 1892 | return 1 | ||
3029 | 1893 | self.run_selftest(test_suite_factory=Test, lsprof_tests=True) | ||
3030 | 1894 | self.assertLength(1, calls) | ||
3031 | 1895 | |||
3032 | 1820 | def test_random(self): | 1896 | def test_random(self): |
3033 | 1821 | # test randomising by listing a number of tests. | 1897 | # test randomising by listing a number of tests. |
3034 | 1822 | output_123 = self.run_selftest(test_suite_factory=self.factory, | 1898 | output_123 = self.run_selftest(test_suite_factory=self.factory, |
3035 | @@ -1877,8 +1953,8 @@ | |||
3036 | 1877 | def test_transport_sftp(self): | 1953 | def test_transport_sftp(self): |
3037 | 1878 | try: | 1954 | try: |
3038 | 1879 | import bzrlib.transport.sftp | 1955 | import bzrlib.transport.sftp |
3041 | 1880 | except ParamikoNotPresent: | 1956 | except errors.ParamikoNotPresent: |
3042 | 1881 | raise TestSkipped("Paramiko not present") | 1957 | raise tests.TestSkipped("Paramiko not present") |
3043 | 1882 | self.check_transport_set(bzrlib.transport.sftp.SFTPAbsoluteServer) | 1958 | self.check_transport_set(bzrlib.transport.sftp.SFTPAbsoluteServer) |
3044 | 1883 | 1959 | ||
3045 | 1884 | def test_transport_memory(self): | 1960 | def test_transport_memory(self): |
3046 | @@ -2072,7 +2148,8 @@ | |||
3047 | 2072 | return self.out, self.err | 2148 | return self.out, self.err |
3048 | 2073 | 2149 | ||
3049 | 2074 | 2150 | ||
3051 | 2075 | class TestRunBzrSubprocess(tests.TestCaseWithTransport): | 2151 | class TestWithFakedStartBzrSubprocess(tests.TestCaseWithTransport): |
3052 | 2152 | """Base class for tests testing how we might run bzr.""" | ||
3053 | 2076 | 2153 | ||
3054 | 2077 | def setUp(self): | 2154 | def setUp(self): |
3055 | 2078 | tests.TestCaseWithTransport.setUp(self) | 2155 | tests.TestCaseWithTransport.setUp(self) |
3056 | @@ -2089,6 +2166,9 @@ | |||
3057 | 2089 | 'working_dir':working_dir, 'allow_plugins':allow_plugins}) | 2166 | 'working_dir':working_dir, 'allow_plugins':allow_plugins}) |
3058 | 2090 | return self.next_subprocess | 2167 | return self.next_subprocess |
3059 | 2091 | 2168 | ||
3060 | 2169 | |||
3061 | 2170 | class TestRunBzrSubprocess(TestWithFakedStartBzrSubprocess): | ||
3062 | 2171 | |||
3063 | 2092 | def assertRunBzrSubprocess(self, expected_args, process, *args, **kwargs): | 2172 | def assertRunBzrSubprocess(self, expected_args, process, *args, **kwargs): |
3064 | 2093 | """Run run_bzr_subprocess with args and kwargs using a stubbed process. | 2173 | """Run run_bzr_subprocess with args and kwargs using a stubbed process. |
3065 | 2094 | 2174 | ||
3066 | @@ -2157,6 +2237,32 @@ | |||
3067 | 2157 | StubProcess(), '', allow_plugins=True) | 2237 | StubProcess(), '', allow_plugins=True) |
3068 | 2158 | 2238 | ||
3069 | 2159 | 2239 | ||
3070 | 2240 | class TestFinishBzrSubprocess(TestWithFakedStartBzrSubprocess): | ||
3071 | 2241 | |||
3072 | 2242 | def test_finish_bzr_subprocess_with_error(self): | ||
3073 | 2243 | """finish_bzr_subprocess allows specification of the desired exit code. | ||
3074 | 2244 | """ | ||
3075 | 2245 | process = StubProcess(err="unknown command", retcode=3) | ||
3076 | 2246 | result = self.finish_bzr_subprocess(process, retcode=3) | ||
3077 | 2247 | self.assertEqual('', result[0]) | ||
3078 | 2248 | self.assertContainsRe(result[1], 'unknown command') | ||
3079 | 2249 | |||
3080 | 2250 | def test_finish_bzr_subprocess_ignoring_retcode(self): | ||
3081 | 2251 | """finish_bzr_subprocess allows the exit code to be ignored.""" | ||
3082 | 2252 | process = StubProcess(err="unknown command", retcode=3) | ||
3083 | 2253 | result = self.finish_bzr_subprocess(process, retcode=None) | ||
3084 | 2254 | self.assertEqual('', result[0]) | ||
3085 | 2255 | self.assertContainsRe(result[1], 'unknown command') | ||
3086 | 2256 | |||
3087 | 2257 | def test_finish_subprocess_with_unexpected_retcode(self): | ||
3088 | 2258 | """finish_bzr_subprocess raises self.failureException if the retcode is | ||
3089 | 2259 | not the expected one. | ||
3090 | 2260 | """ | ||
3091 | 2261 | process = StubProcess(err="unknown command", retcode=3) | ||
3092 | 2262 | self.assertRaises(self.failureException, self.finish_bzr_subprocess, | ||
3093 | 2263 | process) | ||
3094 | 2264 | |||
3095 | 2265 | |||
3096 | 2160 | class _DontSpawnProcess(Exception): | 2266 | class _DontSpawnProcess(Exception): |
3097 | 2161 | """A simple exception which just allows us to skip unnecessary steps""" | 2267 | """A simple exception which just allows us to skip unnecessary steps""" |
3098 | 2162 | 2268 | ||
3099 | @@ -2240,39 +2346,8 @@ | |||
3100 | 2240 | self.assertEqual(['foo', 'current'], chdirs) | 2346 | self.assertEqual(['foo', 'current'], chdirs) |
3101 | 2241 | 2347 | ||
3102 | 2242 | 2348 | ||
3136 | 2243 | class TestBzrSubprocess(tests.TestCaseWithTransport): | 2349 | class TestActuallyStartBzrSubprocess(tests.TestCaseWithTransport): |
3137 | 2244 | 2350 | """Tests that really need to do things with an external bzr.""" | |
3105 | 2245 | def test_start_and_stop_bzr_subprocess(self): | ||
3106 | 2246 | """We can start and perform other test actions while that process is | ||
3107 | 2247 | still alive. | ||
3108 | 2248 | """ | ||
3109 | 2249 | process = self.start_bzr_subprocess(['--version']) | ||
3110 | 2250 | result = self.finish_bzr_subprocess(process) | ||
3111 | 2251 | self.assertContainsRe(result[0], 'is free software') | ||
3112 | 2252 | self.assertEqual('', result[1]) | ||
3113 | 2253 | |||
3114 | 2254 | def test_start_and_stop_bzr_subprocess_with_error(self): | ||
3115 | 2255 | """finish_bzr_subprocess allows specification of the desired exit code. | ||
3116 | 2256 | """ | ||
3117 | 2257 | process = self.start_bzr_subprocess(['--versionn']) | ||
3118 | 2258 | result = self.finish_bzr_subprocess(process, retcode=3) | ||
3119 | 2259 | self.assertEqual('', result[0]) | ||
3120 | 2260 | self.assertContainsRe(result[1], 'unknown command') | ||
3121 | 2261 | |||
3122 | 2262 | def test_start_and_stop_bzr_subprocess_ignoring_retcode(self): | ||
3123 | 2263 | """finish_bzr_subprocess allows the exit code to be ignored.""" | ||
3124 | 2264 | process = self.start_bzr_subprocess(['--versionn']) | ||
3125 | 2265 | result = self.finish_bzr_subprocess(process, retcode=None) | ||
3126 | 2266 | self.assertEqual('', result[0]) | ||
3127 | 2267 | self.assertContainsRe(result[1], 'unknown command') | ||
3128 | 2268 | |||
3129 | 2269 | def test_start_and_stop_bzr_subprocess_with_unexpected_retcode(self): | ||
3130 | 2270 | """finish_bzr_subprocess raises self.failureException if the retcode is | ||
3131 | 2271 | not the expected one. | ||
3132 | 2272 | """ | ||
3133 | 2273 | process = self.start_bzr_subprocess(['--versionn']) | ||
3134 | 2274 | self.assertRaises(self.failureException, self.finish_bzr_subprocess, | ||
3135 | 2275 | process) | ||
3138 | 2276 | 2351 | ||
3139 | 2277 | def test_start_and_stop_bzr_subprocess_send_signal(self): | 2352 | def test_start_and_stop_bzr_subprocess_send_signal(self): |
3140 | 2278 | """finish_bzr_subprocess raises self.failureException if the retcode is | 2353 | """finish_bzr_subprocess raises self.failureException if the retcode is |
3141 | @@ -2286,14 +2361,6 @@ | |||
3142 | 2286 | self.assertEqual('', result[0]) | 2361 | self.assertEqual('', result[0]) |
3143 | 2287 | self.assertEqual('bzr: interrupted\n', result[1]) | 2362 | self.assertEqual('bzr: interrupted\n', result[1]) |
3144 | 2288 | 2363 | ||
3145 | 2289 | def test_start_and_stop_working_dir(self): | ||
3146 | 2290 | cwd = osutils.getcwd() | ||
3147 | 2291 | self.make_branch_and_tree('one') | ||
3148 | 2292 | process = self.start_bzr_subprocess(['root'], working_dir='one') | ||
3149 | 2293 | result = self.finish_bzr_subprocess(process, universal_newlines=True) | ||
3150 | 2294 | self.assertEndsWith(result[0], 'one\n') | ||
3151 | 2295 | self.assertEqual('', result[1]) | ||
3152 | 2296 | |||
3153 | 2297 | 2364 | ||
3154 | 2298 | class TestKnownFailure(tests.TestCase): | 2365 | class TestKnownFailure(tests.TestCase): |
3155 | 2299 | 2366 | ||
3156 | @@ -2681,10 +2748,52 @@ | |||
3157 | 2681 | 2748 | ||
3158 | 2682 | class TestTestSuite(tests.TestCase): | 2749 | class TestTestSuite(tests.TestCase): |
3159 | 2683 | 2750 | ||
3160 | 2751 | def test__test_suite_testmod_names(self): | ||
3161 | 2752 | # Test that a plausible list of test module names are returned | ||
3162 | 2753 | # by _test_suite_testmod_names. | ||
3163 | 2754 | test_list = tests._test_suite_testmod_names() | ||
3164 | 2755 | self.assertSubset([ | ||
3165 | 2756 | 'bzrlib.tests.blackbox', | ||
3166 | 2757 | 'bzrlib.tests.per_transport', | ||
3167 | 2758 | 'bzrlib.tests.test_selftest', | ||
3168 | 2759 | ], | ||
3169 | 2760 | test_list) | ||
3170 | 2761 | |||
3171 | 2762 | def test__test_suite_modules_to_doctest(self): | ||
3172 | 2763 | # Test that a plausible list of modules to doctest is returned | ||
3173 | 2764 | # by _test_suite_modules_to_doctest. | ||
3174 | 2765 | test_list = tests._test_suite_modules_to_doctest() | ||
3175 | 2766 | self.assertSubset([ | ||
3176 | 2767 | 'bzrlib.timestamp', | ||
3177 | 2768 | ], | ||
3178 | 2769 | test_list) | ||
3179 | 2770 | |||
3180 | 2684 | def test_test_suite(self): | 2771 | def test_test_suite(self): |
3184 | 2685 | # This test is slow - it loads the entire test suite to operate, so we | 2772 | # test_suite() loads the entire test suite to operate. To avoid this |
3185 | 2686 | # do a single test with one test in each category | 2773 | # overhead, and yet still be confident that things are happening, |
3186 | 2687 | test_list = [ | 2774 | # we temporarily replace two functions used by test_suite with |
3187 | 2775 | # test doubles that supply a few sample tests to load, and check they | ||
3188 | 2776 | # are loaded. | ||
3189 | 2777 | calls = [] | ||
3190 | 2778 | def _test_suite_testmod_names(): | ||
3191 | 2779 | calls.append("testmod_names") | ||
3192 | 2780 | return [ | ||
3193 | 2781 | 'bzrlib.tests.blackbox.test_branch', | ||
3194 | 2782 | 'bzrlib.tests.per_transport', | ||
3195 | 2783 | 'bzrlib.tests.test_selftest', | ||
3196 | 2784 | ] | ||
3197 | 2785 | original_testmod_names = tests._test_suite_testmod_names | ||
3198 | 2786 | def _test_suite_modules_to_doctest(): | ||
3199 | 2787 | calls.append("modules_to_doctest") | ||
3200 | 2788 | return ['bzrlib.timestamp'] | ||
3201 | 2789 | orig_modules_to_doctest = tests._test_suite_modules_to_doctest | ||
3202 | 2790 | def restore_names(): | ||
3203 | 2791 | tests._test_suite_testmod_names = original_testmod_names | ||
3204 | 2792 | tests._test_suite_modules_to_doctest = orig_modules_to_doctest | ||
3205 | 2793 | self.addCleanup(restore_names) | ||
3206 | 2794 | tests._test_suite_testmod_names = _test_suite_testmod_names | ||
3207 | 2795 | tests._test_suite_modules_to_doctest = _test_suite_modules_to_doctest | ||
3208 | 2796 | expected_test_list = [ | ||
3209 | 2688 | # testmod_names | 2797 | # testmod_names |
3210 | 2689 | 'bzrlib.tests.blackbox.test_branch.TestBranch.test_branch', | 2798 | 'bzrlib.tests.blackbox.test_branch.TestBranch.test_branch', |
3211 | 2690 | ('bzrlib.tests.per_transport.TransportTests' | 2799 | ('bzrlib.tests.per_transport.TransportTests' |
3212 | @@ -2695,13 +2804,16 @@ | |||
3213 | 2695 | # plugins can't be tested that way since selftest may be run with | 2804 | # plugins can't be tested that way since selftest may be run with |
3214 | 2696 | # --no-plugins | 2805 | # --no-plugins |
3215 | 2697 | ] | 2806 | ] |
3218 | 2698 | suite = tests.test_suite(test_list) | 2807 | suite = tests.test_suite() |
3219 | 2699 | self.assertEquals(test_list, _test_ids(suite)) | 2808 | self.assertEqual(set(["testmod_names", "modules_to_doctest"]), |
3220 | 2809 | set(calls)) | ||
3221 | 2810 | self.assertSubset(expected_test_list, _test_ids(suite)) | ||
3222 | 2700 | 2811 | ||
3223 | 2701 | def test_test_suite_list_and_start(self): | 2812 | def test_test_suite_list_and_start(self): |
3224 | 2702 | # We cannot test this at the same time as the main load, because we want | 2813 | # We cannot test this at the same time as the main load, because we want |
3227 | 2703 | # to know that starting_with == None works. So a second full load is | 2814 | # to know that starting_with == None works. So a second load is |
3228 | 2704 | # incurred. | 2815 | # incurred - note that the starting_with parameter causes a partial load |
3229 | 2816 | # rather than a full load so this test should be pretty quick. | ||
3230 | 2705 | test_list = ['bzrlib.tests.test_selftest.TestTestSuite.test_test_suite'] | 2817 | test_list = ['bzrlib.tests.test_selftest.TestTestSuite.test_test_suite'] |
3231 | 2706 | suite = tests.test_suite(test_list, | 2818 | suite = tests.test_suite(test_list, |
3232 | 2707 | ['bzrlib.tests.test_selftest.TestTestSuite']) | 2819 | ['bzrlib.tests.test_selftest.TestTestSuite']) |
3233 | @@ -2853,19 +2965,3 @@ | |||
3234 | 2853 | self.verbosity) | 2965 | self.verbosity) |
3235 | 2854 | tests.run_suite(suite, runner_class=MyRunner, stream=StringIO()) | 2966 | tests.run_suite(suite, runner_class=MyRunner, stream=StringIO()) |
3236 | 2855 | self.assertLength(1, calls) | 2967 | self.assertLength(1, calls) |
3237 | 2856 | |||
3238 | 2857 | def test_done(self): | ||
3239 | 2858 | """run_suite should call result.done()""" | ||
3240 | 2859 | self.calls = 0 | ||
3241 | 2860 | def one_more_call(): self.calls += 1 | ||
3242 | 2861 | def test_function(): | ||
3243 | 2862 | pass | ||
3244 | 2863 | test = unittest.FunctionTestCase(test_function) | ||
3245 | 2864 | class InstrumentedTestResult(tests.ExtendedTestResult): | ||
3246 | 2865 | def done(self): one_more_call() | ||
3247 | 2866 | class MyRunner(tests.TextTestRunner): | ||
3248 | 2867 | def run(self, test): | ||
3249 | 2868 | return InstrumentedTestResult(self.stream, self.descriptions, | ||
3250 | 2869 | self.verbosity) | ||
3251 | 2870 | tests.run_suite(test, runner_class=MyRunner, stream=StringIO()) | ||
3252 | 2871 | self.assertEquals(1, self.calls) | ||
3253 | 2872 | 2968 | ||
3254 | === modified file 'bzrlib/tests/test_shelf.py' | |||
3255 | --- bzrlib/tests/test_shelf.py 2009-08-26 07:40:38 +0000 | |||
3256 | +++ bzrlib/tests/test_shelf.py 2009-08-28 05:00:33 +0000 | |||
3257 | @@ -476,6 +476,8 @@ | |||
3258 | 476 | def test_shelve_skips_added_root(self): | 476 | def test_shelve_skips_added_root(self): |
3259 | 477 | """Skip adds of the root when iterating through shelvable changes.""" | 477 | """Skip adds of the root when iterating through shelvable changes.""" |
3260 | 478 | tree = self.make_branch_and_tree('tree') | 478 | tree = self.make_branch_and_tree('tree') |
3261 | 479 | tree.lock_tree_write() | ||
3262 | 480 | self.addCleanup(tree.unlock) | ||
3263 | 479 | creator = shelf.ShelfCreator(tree, tree.basis_tree()) | 481 | creator = shelf.ShelfCreator(tree, tree.basis_tree()) |
3264 | 480 | self.addCleanup(creator.finalize) | 482 | self.addCleanup(creator.finalize) |
3265 | 481 | self.assertEqual([], list(creator.iter_shelvable())) | 483 | self.assertEqual([], list(creator.iter_shelvable())) |
3266 | 482 | 484 | ||
3267 | === modified file 'bzrlib/tests/test_smart.py' | |||
3268 | --- bzrlib/tests/test_smart.py 2009-08-17 23:15:55 +0000 | |||
3269 | +++ bzrlib/tests/test_smart.py 2009-09-03 15:26:27 +0000 | |||
3270 | @@ -36,6 +36,7 @@ | |||
3271 | 36 | smart, | 36 | smart, |
3272 | 37 | tests, | 37 | tests, |
3273 | 38 | urlutils, | 38 | urlutils, |
3274 | 39 | versionedfile, | ||
3275 | 39 | ) | 40 | ) |
3276 | 40 | from bzrlib.branch import Branch, BranchReferenceFormat | 41 | from bzrlib.branch import Branch, BranchReferenceFormat |
3277 | 41 | import bzrlib.smart.branch | 42 | import bzrlib.smart.branch |
3278 | @@ -87,8 +88,7 @@ | |||
3279 | 87 | if self._chroot_server is None: | 88 | if self._chroot_server is None: |
3280 | 88 | backing_transport = tests.TestCaseWithTransport.get_transport(self) | 89 | backing_transport = tests.TestCaseWithTransport.get_transport(self) |
3281 | 89 | self._chroot_server = chroot.ChrootServer(backing_transport) | 90 | self._chroot_server = chroot.ChrootServer(backing_transport) |
3284 | 90 | self._chroot_server.setUp() | 91 | self.start_server(self._chroot_server) |
3283 | 91 | self.addCleanup(self._chroot_server.tearDown) | ||
3285 | 92 | t = get_transport(self._chroot_server.get_url()) | 92 | t = get_transport(self._chroot_server.get_url()) |
3286 | 93 | if relpath is not None: | 93 | if relpath is not None: |
3287 | 94 | t = t.clone(relpath) | 94 | t = t.clone(relpath) |
3288 | @@ -113,6 +113,25 @@ | |||
3289 | 113 | return self.get_transport().get_smart_medium() | 113 | return self.get_transport().get_smart_medium() |
3290 | 114 | 114 | ||
3291 | 115 | 115 | ||
3292 | 116 | class TestByteStreamToStream(tests.TestCase): | ||
3293 | 117 | |||
3294 | 118 | def test_repeated_substreams_same_kind_are_one_stream(self): | ||
3295 | 119 | # Make a stream - an iterable of bytestrings. | ||
3296 | 120 | stream = [('text', [versionedfile.FulltextContentFactory(('k1',), None, | ||
3297 | 121 | None, 'foo')]),('text', [ | ||
3298 | 122 | versionedfile.FulltextContentFactory(('k2',), None, None, 'bar')])] | ||
3299 | 123 | fmt = bzrdir.format_registry.get('pack-0.92')().repository_format | ||
3300 | 124 | bytes = smart.repository._stream_to_byte_stream(stream, fmt) | ||
3301 | 125 | streams = [] | ||
3302 | 126 | # Iterate the resulting iterable; checking that we get only one stream | ||
3303 | 127 | # out. | ||
3304 | 128 | fmt, stream = smart.repository._byte_stream_to_stream(bytes) | ||
3305 | 129 | for kind, substream in stream: | ||
3306 | 130 | streams.append((kind, list(substream))) | ||
3307 | 131 | self.assertLength(1, streams) | ||
3308 | 132 | self.assertLength(2, streams[0][1]) | ||
3309 | 133 | |||
3310 | 134 | |||
3311 | 116 | class TestSmartServerResponse(tests.TestCase): | 135 | class TestSmartServerResponse(tests.TestCase): |
3312 | 117 | 136 | ||
3313 | 118 | def test__eq__(self): | 137 | def test__eq__(self): |
3314 | 119 | 138 | ||
3315 | === modified file 'bzrlib/tests/test_transport.py' | |||
3316 | --- bzrlib/tests/test_transport.py 2009-03-23 14:59:43 +0000 | |||
3317 | +++ bzrlib/tests/test_transport.py 2009-08-27 22:17:35 +0000 | |||
3318 | @@ -363,24 +363,22 @@ | |||
3319 | 363 | def test_abspath(self): | 363 | def test_abspath(self): |
3320 | 364 | # The abspath is always relative to the chroot_url. | 364 | # The abspath is always relative to the chroot_url. |
3321 | 365 | server = ChrootServer(get_transport('memory:///foo/bar/')) | 365 | server = ChrootServer(get_transport('memory:///foo/bar/')) |
3323 | 366 | server.setUp() | 366 | self.start_server(server) |
3324 | 367 | transport = get_transport(server.get_url()) | 367 | transport = get_transport(server.get_url()) |
3325 | 368 | self.assertEqual(server.get_url(), transport.abspath('/')) | 368 | self.assertEqual(server.get_url(), transport.abspath('/')) |
3326 | 369 | 369 | ||
3327 | 370 | subdir_transport = transport.clone('subdir') | 370 | subdir_transport = transport.clone('subdir') |
3328 | 371 | self.assertEqual(server.get_url(), subdir_transport.abspath('/')) | 371 | self.assertEqual(server.get_url(), subdir_transport.abspath('/')) |
3329 | 372 | server.tearDown() | ||
3330 | 373 | 372 | ||
3331 | 374 | def test_clone(self): | 373 | def test_clone(self): |
3332 | 375 | server = ChrootServer(get_transport('memory:///foo/bar/')) | 374 | server = ChrootServer(get_transport('memory:///foo/bar/')) |
3334 | 376 | server.setUp() | 375 | self.start_server(server) |
3335 | 377 | transport = get_transport(server.get_url()) | 376 | transport = get_transport(server.get_url()) |
3336 | 378 | # relpath from root and root path are the same | 377 | # relpath from root and root path are the same |
3337 | 379 | relpath_cloned = transport.clone('foo') | 378 | relpath_cloned = transport.clone('foo') |
3338 | 380 | abspath_cloned = transport.clone('/foo') | 379 | abspath_cloned = transport.clone('/foo') |
3339 | 381 | self.assertEqual(server, relpath_cloned.server) | 380 | self.assertEqual(server, relpath_cloned.server) |
3340 | 382 | self.assertEqual(server, abspath_cloned.server) | 381 | self.assertEqual(server, abspath_cloned.server) |
3341 | 383 | server.tearDown() | ||
3342 | 384 | 382 | ||
3343 | 385 | def test_chroot_url_preserves_chroot(self): | 383 | def test_chroot_url_preserves_chroot(self): |
3344 | 386 | """Calling get_transport on a chroot transport's base should produce a | 384 | """Calling get_transport on a chroot transport's base should produce a |
3345 | @@ -393,12 +391,11 @@ | |||
3346 | 393 | new_transport = get_transport(parent_url) | 391 | new_transport = get_transport(parent_url) |
3347 | 394 | """ | 392 | """ |
3348 | 395 | server = ChrootServer(get_transport('memory:///path/subpath')) | 393 | server = ChrootServer(get_transport('memory:///path/subpath')) |
3350 | 396 | server.setUp() | 394 | self.start_server(server) |
3351 | 397 | transport = get_transport(server.get_url()) | 395 | transport = get_transport(server.get_url()) |
3352 | 398 | new_transport = get_transport(transport.base) | 396 | new_transport = get_transport(transport.base) |
3353 | 399 | self.assertEqual(transport.server, new_transport.server) | 397 | self.assertEqual(transport.server, new_transport.server) |
3354 | 400 | self.assertEqual(transport.base, new_transport.base) | 398 | self.assertEqual(transport.base, new_transport.base) |
3355 | 401 | server.tearDown() | ||
3356 | 402 | 399 | ||
3357 | 403 | def test_urljoin_preserves_chroot(self): | 400 | def test_urljoin_preserves_chroot(self): |
3358 | 404 | """Using urlutils.join(url, '..') on a chroot URL should not produce a | 401 | """Using urlutils.join(url, '..') on a chroot URL should not produce a |
3359 | @@ -410,11 +407,10 @@ | |||
3360 | 410 | new_transport = get_transport(parent_url) | 407 | new_transport = get_transport(parent_url) |
3361 | 411 | """ | 408 | """ |
3362 | 412 | server = ChrootServer(get_transport('memory:///path/')) | 409 | server = ChrootServer(get_transport('memory:///path/')) |
3364 | 413 | server.setUp() | 410 | self.start_server(server) |
3365 | 414 | transport = get_transport(server.get_url()) | 411 | transport = get_transport(server.get_url()) |
3366 | 415 | self.assertRaises( | 412 | self.assertRaises( |
3367 | 416 | InvalidURLJoin, urlutils.join, transport.base, '..') | 413 | InvalidURLJoin, urlutils.join, transport.base, '..') |
3368 | 417 | server.tearDown() | ||
3369 | 418 | 414 | ||
3370 | 419 | 415 | ||
3371 | 420 | class ChrootServerTest(TestCase): | 416 | class ChrootServerTest(TestCase): |
3372 | @@ -428,7 +424,10 @@ | |||
3373 | 428 | backing_transport = MemoryTransport() | 424 | backing_transport = MemoryTransport() |
3374 | 429 | server = ChrootServer(backing_transport) | 425 | server = ChrootServer(backing_transport) |
3375 | 430 | server.setUp() | 426 | server.setUp() |
3377 | 431 | self.assertTrue(server.scheme in _get_protocol_handlers().keys()) | 427 | try: |
3378 | 428 | self.assertTrue(server.scheme in _get_protocol_handlers().keys()) | ||
3379 | 429 | finally: | ||
3380 | 430 | server.tearDown() | ||
3381 | 432 | 431 | ||
3382 | 433 | def test_tearDown(self): | 432 | def test_tearDown(self): |
3383 | 434 | backing_transport = MemoryTransport() | 433 | backing_transport = MemoryTransport() |
3384 | @@ -441,8 +440,10 @@ | |||
3385 | 441 | backing_transport = MemoryTransport() | 440 | backing_transport = MemoryTransport() |
3386 | 442 | server = ChrootServer(backing_transport) | 441 | server = ChrootServer(backing_transport) |
3387 | 443 | server.setUp() | 442 | server.setUp() |
3390 | 444 | self.assertEqual('chroot-%d:///' % id(server), server.get_url()) | 443 | try: |
3391 | 445 | server.tearDown() | 444 | self.assertEqual('chroot-%d:///' % id(server), server.get_url()) |
3392 | 445 | finally: | ||
3393 | 446 | server.tearDown() | ||
3394 | 446 | 447 | ||
3395 | 447 | 448 | ||
3396 | 448 | class ReadonlyDecoratorTransportTest(TestCase): | 449 | class ReadonlyDecoratorTransportTest(TestCase): |
3397 | @@ -460,15 +461,12 @@ | |||
3398 | 460 | import bzrlib.transport.readonly as readonly | 461 | import bzrlib.transport.readonly as readonly |
3399 | 461 | # connect to '.' via http which is not listable | 462 | # connect to '.' via http which is not listable |
3400 | 462 | server = HttpServer() | 463 | server = HttpServer() |
3410 | 463 | server.setUp() | 464 | self.start_server(server) |
3411 | 464 | try: | 465 | transport = get_transport('readonly+' + server.get_url()) |
3412 | 465 | transport = get_transport('readonly+' + server.get_url()) | 466 | self.failUnless(isinstance(transport, |
3413 | 466 | self.failUnless(isinstance(transport, | 467 | readonly.ReadonlyTransportDecorator)) |
3414 | 467 | readonly.ReadonlyTransportDecorator)) | 468 | self.assertEqual(False, transport.listable()) |
3415 | 468 | self.assertEqual(False, transport.listable()) | 469 | self.assertEqual(True, transport.is_readonly()) |
3407 | 469 | self.assertEqual(True, transport.is_readonly()) | ||
3408 | 470 | finally: | ||
3409 | 471 | server.tearDown() | ||
3416 | 472 | 470 | ||
3417 | 473 | 471 | ||
3418 | 474 | class FakeNFSDecoratorTests(TestCaseInTempDir): | 472 | class FakeNFSDecoratorTests(TestCaseInTempDir): |
3419 | @@ -492,31 +490,24 @@ | |||
3420 | 492 | from bzrlib.tests.http_server import HttpServer | 490 | from bzrlib.tests.http_server import HttpServer |
3421 | 493 | # connect to '.' via http which is not listable | 491 | # connect to '.' via http which is not listable |
3422 | 494 | server = HttpServer() | 492 | server = HttpServer() |
3432 | 495 | server.setUp() | 493 | self.start_server(server) |
3433 | 496 | try: | 494 | transport = self.get_nfs_transport(server.get_url()) |
3434 | 497 | transport = self.get_nfs_transport(server.get_url()) | 495 | self.assertIsInstance( |
3435 | 498 | self.assertIsInstance( | 496 | transport, bzrlib.transport.fakenfs.FakeNFSTransportDecorator) |
3436 | 499 | transport, bzrlib.transport.fakenfs.FakeNFSTransportDecorator) | 497 | self.assertEqual(False, transport.listable()) |
3437 | 500 | self.assertEqual(False, transport.listable()) | 498 | self.assertEqual(True, transport.is_readonly()) |
3429 | 501 | self.assertEqual(True, transport.is_readonly()) | ||
3430 | 502 | finally: | ||
3431 | 503 | server.tearDown() | ||
3438 | 504 | 499 | ||
3439 | 505 | def test_fakenfs_server_default(self): | 500 | def test_fakenfs_server_default(self): |
3440 | 506 | # a FakeNFSServer() should bring up a local relpath server for itself | 501 | # a FakeNFSServer() should bring up a local relpath server for itself |
3441 | 507 | import bzrlib.transport.fakenfs as fakenfs | 502 | import bzrlib.transport.fakenfs as fakenfs |
3442 | 508 | server = fakenfs.FakeNFSServer() | 503 | server = fakenfs.FakeNFSServer() |
3454 | 509 | server.setUp() | 504 | self.start_server(server) |
3455 | 510 | try: | 505 | # the url should be decorated appropriately |
3456 | 511 | # the url should be decorated appropriately | 506 | self.assertStartsWith(server.get_url(), 'fakenfs+') |
3457 | 512 | self.assertStartsWith(server.get_url(), 'fakenfs+') | 507 | # and we should be able to get a transport for it |
3458 | 513 | # and we should be able to get a transport for it | 508 | transport = get_transport(server.get_url()) |
3459 | 514 | transport = get_transport(server.get_url()) | 509 | # which must be a FakeNFSTransportDecorator instance. |
3460 | 515 | # which must be a FakeNFSTransportDecorator instance. | 510 | self.assertIsInstance(transport, fakenfs.FakeNFSTransportDecorator) |
3450 | 516 | self.assertIsInstance( | ||
3451 | 517 | transport, fakenfs.FakeNFSTransportDecorator) | ||
3452 | 518 | finally: | ||
3453 | 519 | server.tearDown() | ||
3461 | 520 | 511 | ||
3462 | 521 | def test_fakenfs_rename_semantics(self): | 512 | def test_fakenfs_rename_semantics(self): |
3463 | 522 | # a FakeNFS transport must mangle the way rename errors occur to | 513 | # a FakeNFS transport must mangle the way rename errors occur to |
3464 | @@ -587,8 +578,7 @@ | |||
3465 | 587 | def setUp(self): | 578 | def setUp(self): |
3466 | 588 | super(TestTransportImplementation, self).setUp() | 579 | super(TestTransportImplementation, self).setUp() |
3467 | 589 | self._server = self.transport_server() | 580 | self._server = self.transport_server() |
3470 | 590 | self._server.setUp() | 581 | self.start_server(self._server) |
3469 | 591 | self.addCleanup(self._server.tearDown) | ||
3471 | 592 | 582 | ||
3472 | 593 | def get_transport(self, relpath=None): | 583 | def get_transport(self, relpath=None): |
3473 | 594 | """Return a connected transport to the local directory. | 584 | """Return a connected transport to the local directory. |
3474 | 595 | 585 | ||
3475 | === modified file 'doc/_templates/index.html' | |||
3476 | --- doc/_templates/index.html 2009-07-22 14:36:38 +0000 | |||
3477 | +++ doc/_templates/index.html 2009-08-18 00:10:19 +0000 | |||
3478 | @@ -26,19 +26,17 @@ | |||
3479 | 26 | <p class="biglink"><a class="biglink" href="{{ pathto("en/upgrade-guide/index") }}">Upgrade Guide</a><br/> | 26 | <p class="biglink"><a class="biglink" href="{{ pathto("en/upgrade-guide/index") }}">Upgrade Guide</a><br/> |
3480 | 27 | <span class="linkdescr">moving to Bazaar 2.x</span> | 27 | <span class="linkdescr">moving to Bazaar 2.x</span> |
3481 | 28 | </p> | 28 | </p> |
3483 | 29 | <p class="biglink"><a class="biglink" href="{{ pathto("en/migration/index") }}">Migration Docs</a><br/> | 29 | <p class="biglink"><a class="biglink" href="http://doc.bazaar-vcs.org/migration/en/">Migration Docs</a><br/> |
3484 | 30 | <span class="linkdescr">for refugees of other tools</span> | 30 | <span class="linkdescr">for refugees of other tools</span> |
3485 | 31 | </p> | 31 | </p> |
3488 | 32 | <p class="biglink"><a class="biglink" href="{{ pathto("developers/index") }}">Developer Docs</a><br/> | 32 | <p class="biglink"><a class="biglink" href="http://doc.bazaar-vcs.org/plugins/en/">Plugins Guide</a><br/> |
3489 | 33 | <span class="linkdescr">polices and tools for giving back</span> | 33 | <span class="linkdescr">help on popular plugins</span> |
3490 | 34 | </p> | 34 | </p> |
3491 | 35 | </td></tr> | 35 | </td></tr> |
3492 | 36 | </table> | 36 | </table> |
3493 | 37 | 37 | ||
3498 | 38 | <p>Other languages: | 38 | <p>Keen to help? See the <a href="{{ pathto("developers/index") }}">Developer Docs</a> |
3499 | 39 | <a href="{{ pathto("index.es") }}">Spanish</a>, | 39 | for policies and tools on contributing code, tests and documentation.</p> |
3496 | 40 | <a href="{{ pathto("index.ru") }}">Russian</a> | ||
3497 | 41 | </p> | ||
3500 | 42 | 40 | ||
3501 | 43 | 41 | ||
3502 | 44 | <h2>Related Links</h2> | 42 | <h2>Related Links</h2> |
3503 | @@ -59,4 +57,9 @@ | |||
3504 | 59 | </td></tr> | 57 | </td></tr> |
3505 | 60 | </table> | 58 | </table> |
3506 | 61 | 59 | ||
3507 | 60 | <hr> | ||
3508 | 61 | <p>Other languages: | ||
3509 | 62 | <a href="{{ pathto("index.es") }}">Spanish</a>, | ||
3510 | 63 | <a href="{{ pathto("index.ru") }}">Russian</a> | ||
3511 | 64 | </p> | ||
3512 | 62 | {% endblock %} | 65 | {% endblock %} |
3513 | 63 | 66 | ||
3514 | === modified file 'doc/contents.txt' | |||
3515 | --- doc/contents.txt 2009-07-22 13:41:01 +0000 | |||
3516 | +++ doc/contents.txt 2009-08-18 00:10:19 +0000 | |||
3517 | @@ -20,7 +20,6 @@ | |||
3518 | 20 | 20 | ||
3519 | 21 | en/release-notes/index | 21 | en/release-notes/index |
3520 | 22 | en/upgrade-guide/index | 22 | en/upgrade-guide/index |
3521 | 23 | en/migration/index | ||
3522 | 24 | developers/index | 23 | developers/index |
3523 | 25 | 24 | ||
3524 | 26 | 25 | ||
3525 | 27 | 26 | ||
3526 | === modified file 'doc/developers/bug-handling.txt' | |||
3527 | --- doc/developers/bug-handling.txt 2009-08-24 00:29:31 +0000 | |||
3528 | +++ doc/developers/bug-handling.txt 2009-08-24 20:16:15 +0000 | |||
3529 | @@ -142,12 +142,8 @@ | |||
3530 | 142 | it's not a good idea for a developer to spend time reproducing the bug | 142 | it's not a good idea for a developer to spend time reproducing the bug |
3531 | 143 | until they're going to work on it.) | 143 | until they're going to work on it.) |
3532 | 144 | Triaged | 144 | Triaged |
3539 | 145 | This is an odd state - one we consider a bug in launchpad, as it really | 145 | We don't use this status. If it is set, it means the same as |
3540 | 146 | means "Importance has been set". We use this to mean the same thing | 146 | Confirmed. |
3535 | 147 | as confirmed, and set no preference on whether Confirmed or Triaged are | ||
3536 | 148 | used. Please do not change a "Confirmed" bug to "Triaged" or vice verca - | ||
3537 | 149 | any reports we create or use will always search for both "Confirmed" and | ||
3538 | 150 | "Triaged" or neither "Confirmed" nor "Triaged". | ||
3541 | 151 | In Progress | 147 | In Progress |
3542 | 152 | Someone has started working on this. | 148 | Someone has started working on this. |
3543 | 153 | Won't Fix | 149 | Won't Fix |
3544 | 154 | 150 | ||
3545 | === removed directory 'doc/en/migration' | |||
3546 | === removed file 'doc/en/migration/index.txt' | |||
3547 | --- doc/en/migration/index.txt 2009-07-22 13:41:01 +0000 | |||
3548 | +++ doc/en/migration/index.txt 1970-01-01 00:00:00 +0000 | |||
3549 | @@ -1,6 +0,0 @@ | |||
3550 | 1 | Bazaar Migration Guide | ||
3551 | 2 | ====================== | ||
3552 | 3 | |||
3553 | 4 | This guide is under development. For notes collected so far, see | ||
3554 | 5 | http://bazaar-vcs.org/BzrMigration/. | ||
3555 | 6 |
This adds 'pack-on-the-fly' support for gc streaming.
1) It restores 'groupcompress' sorting for the requested inventories and texts.
2) It uses a heuristic that is approximately:
if a given block is less than 75% the size of a 'fully utilized' block, then don't re-use the
content directly, but schedule it to be packed into a new block.
The specifics are in '_LazyGroupContentManager.check_is_well_utilized()'.
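As a rough illustration of the heuristic, here is a minimal sketch. The function name and bookkeeping are assumptions for illustration only; the real logic lives in bzrlib's _LazyGroupContentManager.check_is_well_utilized() and considers more than a single ratio.

```python
# Hypothetical sketch of the 75%-utilization heuristic described above.
# 'used_bytes' is the content in the block that the stream actually
# references; 'block_size' is the size of a fully utilized block.

def should_repack_block(used_bytes, block_size, threshold=0.75):
    """Return True if the block should be packed into a new block
    rather than reused directly during streaming."""
    return used_bytes < threshold * block_size

# A 1 MiB block of which only 512 KiB is referenced gets repacked:
print(should_repack_block(512 * 1024, 1024 * 1024))  # True
# A well-utilized block is reused as-is:
print(should_repack_block(900 * 1024, 1024 * 1024))  # False
```

This captures the trade-off reported below: repacking poorly utilized blocks costs some extra time on the first copy, but a well-packed source passes through untouched.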
3) I did some real-world testing, and the results seem pretty good.
To start with, the copy of bzr.dev on Launchpad is currently very poorly packed, taking up >90MB of disk space for a single pack file. After branching that using bzr.dev, I get a 101MB repository locally. If I 'bzr pack', I end up with 39MB (30MB in .pack, and 8.8MB in indices)
101MB poorly-packed-from-lp
101MB post 'bzr.dev branch new-repo' (takes 1m0s locally)
39MB post 'bzr pack' (takes 2m0s locally)
I then tested the results of using the pack-on-the-fly code:
41MB post 'bzr-pack branch new-repo' (takes 1m43s locally)
41MB post 'bzr-pack branch new-repo new-repo2' (takes 1m0s)
Which means that pack-on-the-fly is working as we hoped it would. It
a) Gives pack results almost as good as if we had issued 'bzr pack'
b) Takes a bit of extra time when the source is poorly packed (1m => 1m45s)
c) Takes no extra time when the source is already properly packed (1m => 1m)
4) Unfortunately this was built on top of bzr.dev, but we can land it there, and then cherrypick it back to 2.0. I'll still submit a merge request for 2.0.