Merge lp:~jameinel/bzr/2.1b1-pack-on-the-fly into lp:bzr

Proposed by John A Meinel
Status: Merged
Merged at revision: not available
Proposed branch: lp:~jameinel/bzr/2.1b1-pack-on-the-fly
Merge into: lp:bzr
Diff against target: None lines
To merge this branch: bzr merge lp:~jameinel/bzr/2.1b1-pack-on-the-fly
Reviewer Review Type Date Requested Status
bzr-core Pending
Review via email: mp+11162@code.launchpad.net
Revision history for this message
John A Meinel (jameinel) wrote :

This adds 'pack-on-the-fly' support for gc streaming.

1) It restores 'groupcompress' sorting for the requested inventories and texts.
2) It uses a heuristic that is approximately:
  if a given block is less than 75% the size of a 'fully utilized' block, then don't re-use the
  content directly, but schedule it to be packed into a new block.
  The specifics are in '_LazyGroupContentManager.check_is_well_utilized()'
3) I did some real-world testing, and the results seem pretty good.
   To start with, the copy of bzr.dev on Launchpad is currently very poorly packed, taking up >90MB of disk space for a single pack file. After branching that using bzr.dev, I get a 101MB repository locally. If I 'bzr pack', I end up with 39MB (30MB in .pack, and 8.8MB in indices)

101MB poorly-packed-from-lp
101MB post 'bzr.dev branch new-repo' (takes 1m0s locally)
 39MB post 'bzr pack' (takes 2m0s locally)

I then tested the results of using the pack-on-the-fly
 41MB post 'bzr-pack branch new-repo' (takes 1m43s locally)
 41MB post 'bzr-pack branch new-repo new-repo2' (takes 1m0s)

Which means that pack-on-the-fly is working as we hoped it would. It
 a) Gives almost as good pack results as if we had run 'bzr pack'
 b) Takes a bit of extra time when the source is poorly packed (1m => 1m45s)
 c) Takes no extra time when the source is already properly packed (1m => 1m)

4) Unfortunately this was built on top of bzr.dev, but we can land it there, and then cherrypick it back to 2.0. I'll still submit a merge request for 2.0.
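
The heuristic in (2) can be sketched as a plain function. This is a simplification of `_LazyGroupContentManager.check_is_well_utilized()` from the diff below; the parameter names and the boolean `mixed_content` flag are mine, while the thresholds are the class attributes the patch adds:

```python
def is_well_utilized(num_texts, block_size, total_bytes_used, mixed_content,
                     max_cut_fraction=0.75,
                     full_enough_block_size=3 * 1024 * 1024,
                     full_enough_mixed_block_size=2 * 768 * 1024):
    """Sketch of the 'well utilized' heuristic (illustrative names)."""
    if num_texts == 1:
        # A block holding a single text is never considered well utilized
        return False
    if total_bytes_used < block_size * max_cut_fraction:
        # The block would trim itself below 75% of its size: under-utilized
        return False
    if block_size >= full_enough_block_size:
        # >=3MB is 'full enough', since a normal full block is ~4MB
        return True
    if mixed_content and block_size >= full_enough_mixed_block_size:
        # Mixed-file blocks are considered full at ~1.5MB
        return True
    return False
```

Blocks that fail this check are not copied as-is; their live texts are scheduled for recompression into a new block.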

Revision history for this message
Robert Collins (lifeless) wrote :

Conceptually great; I'm looking now.

The review merge is bong; I'm going to pull locally, sync up and get a clean diff.

Preview Diff

=== modified file 'Makefile'
--- Makefile 2009-08-03 20:38:39 +0000
+++ Makefile 2009-08-27 00:53:27 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2005, 2006, 2007, 2008 Canonical Ltd
+# Copyright (C) 2005, 2006, 2007, 2008, 2009 Canonical Ltd
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -40,8 +40,6 @@
 
 check-nodocs: extensions
 	$(PYTHON) -Werror -O ./bzr selftest -1v $(tests)
-	@echo "Running all tests with no locale."
-	LC_CTYPE= LANG=C LC_ALL= ./bzr selftest -1v $(tests) 2>&1 | sed -e 's/^/[ascii] /'
 
 # Run Python style checker (apt-get install pyflakes)
 #
 
=== modified file 'NEWS'
--- NEWS 2009-08-30 22:02:45 +0000
+++ NEWS 2009-09-03 21:04:22 +0000
@@ -6,6 +6,55 @@
 .. contents:: List of Releases
    :depth: 1
 
+In Development
+##############
+
+Compatibility Breaks
+********************
+
+New Features
+************
+
+Bug Fixes
+*********
+
+* ``bzr check`` in pack-0.92, 1.6 and 1.9 format repositories will no
+  longer report incorrect errors about ``Missing inventory ('TREE_ROOT', ...)``
+  (Robert Collins, #416732)
+
+* Don't restrict the command name used to run the test suite.
+  (Vincent Ladeuil, #419950)
+
+Improvements
+************
+
+Documentation
+*************
+
+API Changes
+***********
+
+* ``bzrlib.tests`` now uses ``stopTestRun`` for its ``TestResult``
+  subclasses - the same as python's unittest module. (Robert Collins)
+
+Internals
+*********
+
+* The ``bzrlib.lsprof`` module has a new class ``BzrProfiler`` which makes
+  profiling in some situations like callbacks and generators easier.
+  (Robert Collins)
+
+Testing
+*******
+
+* Passing ``--lsprof-tests -v`` to bzr selftest will cause lsprof output to
+  be output for every test. Note that this is very verbose! (Robert Collins)
+
+* Test parameterisation now does a shallow copy, not a deep copy of the test
+  to be parameterised. This is not expected to break external use of test
+  parameterisation, and is substantially faster. (Robert Collins)
+
+
 bzr 2.0rc2
 ##########
 
@@ -20,10 +69,34 @@
   revisions that are in the fallback repository. (Regressed in 2.0rc1).
   (John Arbash Meinel, #419241)
 
+* Fetches from 2a to 2a are now again requested in 'groupcompress' order.
+  Groups that are seen as 'underutilized' will be repacked on-the-fly.
+  This means that when the source is fully packed, there is minimal
+  overhead during the fetch, but if the source is poorly packed the result
+  is a fairly well packed repository (not as good as 'bzr pack' but
+  good-enough.) (Robert Collins, John Arbash Meinel, #402652)
+
 * Fix a segmentation fault when computing the ``merge_sort`` of a graph
   that has a ghost in the mainline ancestry.
   (John Arbash Meinel, #419241)
 
+* ``groupcompress`` sort order is now more stable, rather than relying on
+  ``topo_sort`` ordering. The implementation is now
+  ``KnownGraph.gc_sort``. (John Arbash Meinel)
+
+* Local data conversion will generate correct deltas. This is a critical
+  bugfix vs 2.0rc1, and all 2.0rc1 users should upgrade to 2.0rc2 before
+  converting repositories. (Robert Collins, #422849)
+
+* Network streams now decode adjacent records of the same type into a
+  single stream, reducing layering churn. (Robert Collins)
+
+Documentation
+*************
+
+* The main table of contents now provides links to the new Migration Docs
+  and Plugins Guide. (Ian Clatworthy)
+
 
 bzr 2.0rc1
 ##########
@@ -64,6 +137,9 @@
 Bug Fixes
 *********
 
+* Further tweaks to handling of ``bzr add`` messages about ignored files.
+  (Jason Spashett, #76616)
+
 * Fetches were being requested in 'groupcompress' order, but weren't
   recombining the groups. Thus they would 'fragment' to get the correct
   order, but not 'recombine' to actually benefit from it. Until we get
@@ -133,9 +209,6 @@
   classes changed to manage lock lifetime of the trees they open in a way
   consistent with reader-exclusive locks. (Robert Collins, #305006)
 
-Internals
-*********
-
 Testing
 *******
 
@@ -149,13 +222,29 @@
   conversion will commit too many copies a file.
   (Martin Pool, #415508)
 
+Improvements
+************
+
+* ``bzr push`` locally on windows will no longer give a locking error with
+  dirstate based formats. (Robert Collins)
+
+* ``bzr shelve`` and ``bzr unshelve`` now work on windows.
+  (Robert Collins, #305006)
+
 API Changes
 ***********
 
+* ``bzrlib.shelf_ui`` has had the ``from_args`` convenience methods of its
+  classes changed to manage lock lifetime of the trees they open in a way
+  consistent with reader-exclusive locks. (Robert Collins, #305006)
+
 * ``Tree.path_content_summary`` may return a size of None, when called on
   a tree with content filtering where the size of the canonical form
   cannot be cheaply determined. (Martin Pool)
 
+* When manually creating transport servers in test cases, a new helper
+  ``TestCase.start_server`` that registers a cleanup and starts the server
+  should be used. (Robert Collins)
 
 bzr 1.18
 ########
@@ -493,6 +582,17 @@
   ``countTestsCases``. (Robert Collins)
 
 
+bzr 1.17.1 (unreleased)
+#######################
+
+Bug Fixes
+*********
+
+* The optional ``_knit_load_data_pyx`` C extension was never being
+  imported. This caused significant slowdowns when reading data from
+  knit format repositories. (Andrew Bennetts, #405653)
+
+
 bzr 1.17 "So late it's brunch" 2009-07-20
 #########################################
 :Codename: so-late-its-brunch
@@ -991,6 +1091,9 @@
 Testing
 *******
 
+* ``make check`` no longer repeats the test run in ``LANG=C``.
+  (Martin Pool, #386180)
+
 * The number of cores is now correctly detected on OSX. (John Szakmeister)
 
 * The number of cores is also detected on Solaris and win32. (Vincent Ladeuil)
@@ -4971,7 +5074,7 @@
   checkouts. (Aaron Bentley, #182040)
 
 * Stop polluting /tmp when running selftest.
-  (Vincent Ladeuil, #123623)
+  (Vincent Ladeuil, #123363)
 
 * Switch from NFKC => NFC for normalization checks. NFC allows a few
   more characters which should be considered valid.
 
=== modified file 'bzr'
--- bzr 2009-08-11 03:02:56 +0000
+++ bzr 2009-08-28 05:11:10 +0000
@@ -23,7 +23,7 @@
 import warnings
 
 # update this on each release
-_script_version = (2, 0, 0)
+_script_version = (2, 1, 0)
 
 if __doc__ is None:
     print "bzr does not support python -OO."
 
=== modified file 'bzrlib/__init__.py'
--- bzrlib/__init__.py 2009-08-27 07:49:53 +0000
+++ bzrlib/__init__.py 2009-08-30 21:34:42 +0000
@@ -50,7 +50,7 @@
 # Python version 2.0 is (2, 0, 0, 'final', 0)." Additionally we use a
 # releaselevel of 'dev' for unreleased under-development code.
 
-version_info = (2, 0, 0, 'candidate', 1)
+version_info = (2, 1, 0, 'dev', 0)
 
 # API compatibility version: bzrlib is currently API compatible with 1.15.
 api_minimum_version = (1, 17, 0)
 
=== modified file 'bzrlib/_known_graph_py.py'
--- bzrlib/_known_graph_py.py 2009-08-17 20:41:26 +0000
+++ bzrlib/_known_graph_py.py 2009-08-25 18:45:40 +0000
@@ -97,6 +97,10 @@
         return [node for node in self._nodes.itervalues()
                 if not node.parent_keys]
 
+    def _find_tips(self):
+        return [node for node in self._nodes.itervalues()
+                if not node.child_keys]
+
     def _find_gdfo(self):
         nodes = self._nodes
         known_parent_gdfos = {}
@@ -218,6 +222,51 @@
         # We started from the parents, so we don't need to do anymore work
         return topo_order
 
+    def gc_sort(self):
+        """Return a reverse topological ordering which is 'stable'.
+
+        There are a few constraints:
+          1) Reverse topological (all children before all parents)
+          2) Grouped by prefix
+          3) 'stable' sorting, so that we get the same result, independent of
+             machine, or extra data.
+        To do this, we use the same basic algorithm as topo_sort, but when we
+        aren't sure what node to access next, we sort them lexicographically.
+        """
+        tips = self._find_tips()
+        # Split the tips based on prefix
+        prefix_tips = {}
+        for node in tips:
+            if node.key.__class__ is str or len(node.key) == 1:
+                prefix = ''
+            else:
+                prefix = node.key[0]
+            prefix_tips.setdefault(prefix, []).append(node)
+
+        num_seen_children = dict.fromkeys(self._nodes, 0)
+
+        result = []
+        for prefix in sorted(prefix_tips):
+            pending = sorted(prefix_tips[prefix], key=lambda n:n.key,
+                             reverse=True)
+            while pending:
+                node = pending.pop()
+                if node.parent_keys is None:
+                    # Ghost node, skip it
+                    continue
+                result.append(node.key)
+                for parent_key in sorted(node.parent_keys, reverse=True):
+                    parent_node = self._nodes[parent_key]
+                    seen_children = num_seen_children[parent_key] + 1
+                    if seen_children == len(parent_node.child_keys):
+                        # All children have been processed, enqueue this parent
+                        pending.append(parent_node)
+                        # This has been queued up, stop tracking it
+                        del num_seen_children[parent_key]
+                    else:
+                        num_seen_children[parent_key] = seen_children
+        return result
+
     def merge_sort(self, tip_key):
         """Compute the merge sorted graph output."""
         from bzrlib import tsort
 
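
The pure-Python `gc_sort` above can be exercised standalone. The following is an illustrative re-implementation against a plain parent map, with a stand-in node class (not bzrlib's `_KnownGraphNode`); ghost handling is omitted for brevity:

```python
class _Node(object):
    """Tiny stand-in for bzrlib's _KnownGraphNode (illustration only)."""
    def __init__(self, key, parent_keys):
        self.key = key
        self.parent_keys = parent_keys
        self.child_keys = []


def gc_sort(parent_map):
    """Reverse-topological, prefix-grouped ordering with lexicographic
    tie-breaking, following the algorithm above."""
    nodes = {}
    for key, parent_keys in parent_map.items():
        nodes[key] = _Node(key, parent_keys)
    for key, parent_keys in parent_map.items():
        for parent_key in parent_keys:
            nodes[parent_key].child_keys.append(key)
    # Tips are nodes that no other node claims as a parent
    tips = [node for node in nodes.values() if not node.child_keys]
    # Group the tips by prefix (first element of a compound key)
    prefix_tips = {}
    for node in tips:
        if isinstance(node.key, str) or len(node.key) == 1:
            prefix = ''
        else:
            prefix = node.key[0]
        prefix_tips.setdefault(prefix, []).append(node)
    num_seen_children = dict.fromkeys(nodes, 0)
    result = []
    for prefix in sorted(prefix_tips):
        # When several nodes are ready at once, take them in sorted key order
        pending = sorted(prefix_tips[prefix], key=lambda n: n.key,
                         reverse=True)
        while pending:
            node = pending.pop()
            result.append(node.key)
            for parent_key in sorted(node.parent_keys, reverse=True):
                num_seen_children[parent_key] += 1
                if num_seen_children[parent_key] == len(
                        nodes[parent_key].child_keys):
                    # All children emitted; the parent is now ready
                    pending.append(nodes[parent_key])
    return result
```

Because ties are always broken lexicographically, the same graph yields the same ordering on every machine, which is the 'stable' property the docstring asks for.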
=== modified file 'bzrlib/_known_graph_pyx.pyx'
--- bzrlib/_known_graph_pyx.pyx 2009-08-26 16:03:59 +0000
+++ bzrlib/_known_graph_pyx.pyx 2009-09-02 13:32:52 +0000
@@ -25,11 +25,18 @@
25 ctypedef struct PyObject:25 ctypedef struct PyObject:
26 pass26 pass
2727
28 int PyString_CheckExact(object)
29
30 int PyObject_RichCompareBool(object, object, int)
31 int Py_LT
32
33 int PyTuple_CheckExact(object)
28 object PyTuple_New(Py_ssize_t n)34 object PyTuple_New(Py_ssize_t n)
29 Py_ssize_t PyTuple_GET_SIZE(object t)35 Py_ssize_t PyTuple_GET_SIZE(object t)
30 PyObject * PyTuple_GET_ITEM(object t, Py_ssize_t o)36 PyObject * PyTuple_GET_ITEM(object t, Py_ssize_t o)
31 void PyTuple_SET_ITEM(object t, Py_ssize_t o, object v)37 void PyTuple_SET_ITEM(object t, Py_ssize_t o, object v)
3238
39 int PyList_CheckExact(object)
33 Py_ssize_t PyList_GET_SIZE(object l)40 Py_ssize_t PyList_GET_SIZE(object l)
34 PyObject * PyList_GET_ITEM(object l, Py_ssize_t o)41 PyObject * PyList_GET_ITEM(object l, Py_ssize_t o)
35 int PyList_SetItem(object l, Py_ssize_t o, object l) except -142 int PyList_SetItem(object l, Py_ssize_t o, object l) except -1
@@ -108,14 +115,65 @@
         return <_KnownGraphNode>temp_node
 
 
-cdef _KnownGraphNode _get_parent(parents, Py_ssize_t pos):
+cdef _KnownGraphNode _get_tuple_node(tpl, Py_ssize_t pos):
     cdef PyObject *temp_node
-    cdef _KnownGraphNode node
 
-    temp_node = PyTuple_GET_ITEM(parents, pos)
+    temp_node = PyTuple_GET_ITEM(tpl, pos)
     return <_KnownGraphNode>temp_node
 
 
+def get_key(node):
+    cdef _KnownGraphNode real_node
+    real_node = node
+    return real_node.key
+
+
+cdef object _sort_list_nodes(object lst_or_tpl, int reverse):
+    """Sort a list of _KnownGraphNode objects.
+
+    If lst_or_tpl is a list, it is allowed to mutate in place. It may also
+    just return the input list if everything is already sorted.
+    """
+    cdef _KnownGraphNode node1, node2
+    cdef int do_swap, is_tuple
+    cdef Py_ssize_t length
+
+    is_tuple = PyTuple_CheckExact(lst_or_tpl)
+    if not (is_tuple or PyList_CheckExact(lst_or_tpl)):
+        raise TypeError('lst_or_tpl must be a list or tuple.')
+    length = len(lst_or_tpl)
+    if length == 0 or length == 1:
+        return lst_or_tpl
+    if length == 2:
+        if is_tuple:
+            node1 = _get_tuple_node(lst_or_tpl, 0)
+            node2 = _get_tuple_node(lst_or_tpl, 1)
+        else:
+            node1 = _get_list_node(lst_or_tpl, 0)
+            node2 = _get_list_node(lst_or_tpl, 1)
+        if reverse:
+            do_swap = PyObject_RichCompareBool(node1.key, node2.key, Py_LT)
+        else:
+            do_swap = PyObject_RichCompareBool(node2.key, node1.key, Py_LT)
+        if not do_swap:
+            return lst_or_tpl
+        if is_tuple:
+            return (node2, node1)
+        else:
+            # Swap 'in-place', since lists are mutable
+            Py_INCREF(node1)
+            PyList_SetItem(lst_or_tpl, 1, node1)
+            Py_INCREF(node2)
+            PyList_SetItem(lst_or_tpl, 0, node2)
+            return lst_or_tpl
+    # For all other sizes, we just use 'sorted()'
+    if is_tuple:
+        # Note that sorted() is just list(iterable).sort()
+        lst_or_tpl = list(lst_or_tpl)
+    lst_or_tpl.sort(key=get_key, reverse=reverse)
+    return lst_or_tpl
+
+
 cdef class _MergeSorter
 
 cdef class KnownGraph:
@@ -216,6 +274,19 @@
             PyList_Append(tails, node)
         return tails
 
+    def _find_tips(self):
+        cdef PyObject *temp_node
+        cdef _KnownGraphNode node
+        cdef Py_ssize_t pos
+
+        tips = []
+        pos = 0
+        while PyDict_Next(self._nodes, &pos, NULL, &temp_node):
+            node = <_KnownGraphNode>temp_node
+            if PyList_GET_SIZE(node.children) == 0:
+                PyList_Append(tips, node)
+        return tips
+
     def _find_gdfo(self):
         cdef _KnownGraphNode node
         cdef _KnownGraphNode child
@@ -322,7 +393,7 @@
                 continue
             if node.parents is not None and PyTuple_GET_SIZE(node.parents) > 0:
                 for pos from 0 <= pos < PyTuple_GET_SIZE(node.parents):
-                    parent_node = _get_parent(node.parents, pos)
+                    parent_node = _get_tuple_node(node.parents, pos)
                     last_item = last_item + 1
                     if last_item < PyList_GET_SIZE(pending):
                         Py_INCREF(parent_node) # SetItem steals a ref
@@ -397,6 +468,77 @@
         # We started from the parents, so we don't need to do anymore work
         return topo_order
 
+    def gc_sort(self):
+        """Return a reverse topological ordering which is 'stable'.
+
+        There are a few constraints:
+          1) Reverse topological (all children before all parents)
+          2) Grouped by prefix
+          3) 'stable' sorting, so that we get the same result, independent of
+             machine, or extra data.
+        To do this, we use the same basic algorithm as topo_sort, but when we
+        aren't sure what node to access next, we sort them lexicographically.
+        """
+        cdef PyObject *temp
+        cdef Py_ssize_t pos, last_item
+        cdef _KnownGraphNode node, node2, parent_node
+
+        tips = self._find_tips()
+        # Split the tips based on prefix
+        prefix_tips = {}
+        for pos from 0 <= pos < PyList_GET_SIZE(tips):
+            node = _get_list_node(tips, pos)
+            if PyString_CheckExact(node.key) or len(node.key) == 1:
+                prefix = ''
+            else:
+                prefix = node.key[0]
+            temp = PyDict_GetItem(prefix_tips, prefix)
+            if temp == NULL:
+                prefix_tips[prefix] = [node]
+            else:
+                tip_nodes = <object>temp
+                PyList_Append(tip_nodes, node)
+
+        result = []
+        for prefix in sorted(prefix_tips):
+            temp = PyDict_GetItem(prefix_tips, prefix)
+            assert temp != NULL
+            tip_nodes = <object>temp
+            pending = _sort_list_nodes(tip_nodes, 1)
+            last_item = PyList_GET_SIZE(pending) - 1
+            while last_item >= 0:
+                node = _get_list_node(pending, last_item)
+                last_item = last_item - 1
+                if node.parents is None:
+                    # Ghost
+                    continue
+                PyList_Append(result, node.key)
+                # Sorting the parent keys isn't strictly necessary for stable
+                # sorting of a given graph. But it does help minimize the
+                # differences between graphs
+                # For bzr.dev ancestry:
+                #   4.73ms no sort
+                #   7.73ms RichCompareBool sort
+                parents = _sort_list_nodes(node.parents, 1)
+                for pos from 0 <= pos < len(parents):
+                    if PyTuple_CheckExact(parents):
+                        parent_node = _get_tuple_node(parents, pos)
+                    else:
+                        parent_node = _get_list_node(parents, pos)
+                    # TODO: GraphCycle detection
+                    parent_node.seen = parent_node.seen + 1
+                    if (parent_node.seen
+                        == PyList_GET_SIZE(parent_node.children)):
+                        # All children have been processed, queue up this
+                        # parent
+                        last_item = last_item + 1
+                        if last_item < PyList_GET_SIZE(pending):
+                            Py_INCREF(parent_node) # SetItem steals a ref
+                            PyList_SetItem(pending, last_item, parent_node)
+                        else:
+                            PyList_Append(pending, parent_node)
+                        parent_node.seen = 0
+        return result
 
     def merge_sort(self, tip_key):
         """Compute the merge sorted graph output."""
@@ -522,7 +664,7 @@
                 raise RuntimeError('ghost nodes should not be pushed'
                                    ' onto the stack: %s' % (node,))
             if PyTuple_GET_SIZE(node.parents) > 0:
-                parent_node = _get_parent(node.parents, 0)
+                parent_node = _get_tuple_node(node.parents, 0)
                 ms_node.left_parent = parent_node
                 if parent_node.parents is None: # left-hand ghost
                     ms_node.left_pending_parent = None
@@ -532,7 +674,7 @@
             if PyTuple_GET_SIZE(node.parents) > 1:
                 ms_node.pending_parents = []
                 for pos from 1 <= pos < PyTuple_GET_SIZE(node.parents):
-                    parent_node = _get_parent(node.parents, pos)
+                    parent_node = _get_tuple_node(node.parents, pos)
                     if parent_node.parents is None: # ghost
                         continue
                     PyList_Append(ms_node.pending_parents, parent_node)
 
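
The `_sort_list_nodes` helper added above avoids a full sort for the common short cases: zero or one node is returned untouched, two nodes need a single key comparison and a possible swap, and only longer sequences fall back to a real sort. A plain-Python sketch of the same logic (the `Node` type is a stand-in, not bzrlib's `_KnownGraphNode`):

```python
from collections import namedtuple

Node = namedtuple('Node', 'key')  # stand-in node type for illustration


def sort_nodes(nodes, reverse=False):
    """Sketch of _sort_list_nodes: fast paths for lengths 0-2, full sort
    otherwise. Lists may be mutated in place; tuples are copied."""
    if len(nodes) <= 1:
        return nodes
    if len(nodes) == 2:
        a, b = nodes
        # A single comparison decides whether to swap
        swap = (a.key < b.key) if reverse else (b.key < a.key)
        if not swap:
            return nodes
        if isinstance(nodes, tuple):
            return (b, a)
        nodes[0], nodes[1] = b, a  # swap the list in place
        return nodes
    if isinstance(nodes, tuple):
        nodes = list(nodes)  # sorted() is just list(iterable).sort()
    nodes.sort(key=lambda n: n.key, reverse=reverse)
    return nodes
```

In the Pyx version the two-element case matters because `gc_sort` calls this on every node's parents, and most revisions have one or two parents.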
=== modified file 'bzrlib/builtins.py'
--- bzrlib/builtins.py 2009-08-26 03:20:32 +0000
+++ bzrlib/builtins.py 2009-08-28 05:00:33 +0000
@@ -3382,6 +3382,8 @@
                      Option('lsprof-timed',
                             help='Generate lsprof output for benchmarked'
                                  ' sections of code.'),
+                     Option('lsprof-tests',
+                            help='Generate lsprof output for each test.'),
                      Option('cache-dir', type=str,
                             help='Cache intermediate benchmark output in this '
                                  'directory.'),
@@ -3428,7 +3430,7 @@
             first=False, list_only=False,
             randomize=None, exclude=None, strict=False,
             load_list=None, debugflag=None, starting_with=None, subunit=False,
-            parallel=None):
+            parallel=None, lsprof_tests=False):
         from bzrlib.tests import selftest
         import bzrlib.benchmarks as benchmarks
         from bzrlib.benchmarks import tree_creator
@@ -3468,6 +3470,7 @@
                       "transport": transport,
                       "test_suite_factory": test_suite_factory,
                       "lsprof_timed": lsprof_timed,
+                      "lsprof_tests": lsprof_tests,
                       "bench_history": benchfile,
                       "matching_tests_first": first,
                       "list_only": list_only,
 
=== modified file 'bzrlib/groupcompress.py'
--- bzrlib/groupcompress.py 2009-08-26 16:47:51 +0000
+++ bzrlib/groupcompress.py 2009-09-03 15:25:36 +0000
@@ -457,7 +457,6 @@
             # There are code paths that first extract as fulltext, and then
             # extract as storage_kind (smart fetch). So we don't break the
             # refcycle here, but instead in manager.get_record_stream()
-            # self._manager = None
         if storage_kind == 'fulltext':
             return self._bytes
         else:
@@ -469,6 +468,14 @@
 class _LazyGroupContentManager(object):
     """This manages a group of _LazyGroupCompressFactory objects."""
 
+    _max_cut_fraction = 0.75 # We allow a block to be trimmed to 75% of
+                             # current size, and still be considered
+                             # reusable
+    _full_block_size = 4*1024*1024
+    _full_mixed_block_size = 2*1024*1024
+    _full_enough_block_size = 3*1024*1024 # size at which we won't repack
+    _full_enough_mixed_block_size = 2*768*1024 # 1.5MB
+
     def __init__(self, block):
         self._block = block
         # We need to preserve the ordering
@@ -546,22 +553,23 @@
         # time (self._block._content) is a little expensive.
         self._block._ensure_content(self._last_byte)
 
-    def _check_rebuild_block(self):
+    def _check_rebuild_action(self):
         """Check to see if our block should be repacked."""
         total_bytes_used = 0
         last_byte_used = 0
         for factory in self._factories:
             total_bytes_used += factory._end - factory._start
-            last_byte_used = max(last_byte_used, factory._end)
-        # If we are using most of the bytes from the block, we have nothing
-        # else to check (currently more that 1/2)
+            if last_byte_used < factory._end:
+                last_byte_used = factory._end
+        # If we are using more than half of the bytes from the block, we have
+        # nothing else to check
         if total_bytes_used * 2 >= self._block._content_length:
-            return
-        # Can we just strip off the trailing bytes? If we are going to be
-        # transmitting more than 50% of the front of the content, go ahead
+            return None, last_byte_used, total_bytes_used
+        # We are using less than 50% of the content. Is the content we are
+        # using at the beginning of the block? If so, we can just trim the
+        # tail, rather than rebuilding from scratch.
         if total_bytes_used * 2 > last_byte_used:
-            self._trim_block(last_byte_used)
-            return
+            return 'trim', last_byte_used, total_bytes_used
 
         # We are using a small amount of the data, and it isn't just packed
         # nicely at the front, so rebuild the content.
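
The trim-vs-rebuild decision in `_check_rebuild_action` above reduces to a small amount of arithmetic over the referenced byte ranges. A standalone sketch (the function and parameter names here are illustrative, not bzrlib API):

```python
def check_rebuild_action(content_length, ranges):
    """Decide what to do with a block given the (start, end) byte ranges
    of the texts still referenced in it.

    Returns None (keep as-is), 'trim' (drop the unused tail), or
    'rebuild' (recompress only the live texts).
    """
    total_bytes_used = 0
    last_byte_used = 0
    for start, end in ranges:
        total_bytes_used += end - start
        if last_byte_used < end:
            last_byte_used = end
    if total_bytes_used * 2 >= content_length:
        return None      # >=50% of the block is live: reuse it whole
    if total_bytes_used * 2 > last_byte_used:
        return 'trim'    # live bytes sit near the front: cut the tail
    return 'rebuild'     # sparse usage: rebuild the block from scratch
```

Trimming is cheap (truncate and rewrite the header) while rebuilding decompresses and recompresses the live texts, so the cheaper action is preferred whenever the live bytes cluster at the front.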
@@ -574,7 +582,77 @@
         # expanding many deltas into fulltexts, as well.
         # If we build a cheap enough 'strip', then we could try a strip,
         # if that expands the content, we then rebuild.
-        self._rebuild_block()
+        return 'rebuild', last_byte_used, total_bytes_used
+
+    def check_is_well_utilized(self):
+        """Is the current block considered 'well utilized'?
+
+        This is a bit of a heuristic, but it basically asks if the current
+        block considers itself to be a fully developed group, rather than just
+        a loose collection of data.
+        """
+        if len(self._factories) == 1:
+            # A block of length 1 is never considered 'well utilized' :)
+            return False
+        action, last_byte_used, total_bytes_used = self._check_rebuild_action()
+        block_size = self._block._content_length
+        if total_bytes_used < block_size * self._max_cut_fraction:
+            # This block wants to trim itself small enough that we want to
+            # consider it under-utilized.
+            return False
+        # TODO: This code is meant to be the twin of _insert_record_stream's
+        #       'start_new_block' logic. It would probably be better to factor
+        #       out that logic into a shared location, so that it stays
+        #       together better
+        # We currently assume a block is properly utilized whenever it is >75%
+        # of the size of a 'full' block. In normal operation, a block is
+        # considered full when it hits 4MB of same-file content. So any block
+        # >3MB is 'full enough'.
+        # The only time this isn't true is when a given block has large-object
+        # content. (a single file >4MB, etc.)
+        # Under these circumstances, we allow a block to grow to
+        # 2 x largest_content. Which means that if a given block had a large
+        # object, it may actually be under-utilized. However, given that this
+        # is 'pack-on-the-fly' it is probably reasonable to not repack large
+        # content blobs on-the-fly.
+        if block_size >= self._full_enough_block_size:
+            return True
+        # If a block is <3MB, it still may be considered 'full' if it contains
+        # mixed content. The current rule is 2MB of mixed content is considered
+        # full. So check to see if this block contains mixed content, and
+        # set the threshold appropriately.
+        common_prefix = None
+        for factory in self._factories:
+            prefix = factory.key[:-1]
+            if common_prefix is None:
+                common_prefix = prefix
+            elif prefix != common_prefix:
+                # Mixed content, check the size appropriately
+                if block_size >= self._full_enough_mixed_block_size:
+                    return True
+                break
+        # The content failed both the mixed check and the single-content check
+        # so obviously it is not fully utilized
+        # TODO: there is one other constraint that isn't being checked
+        #       namely, that the entries in the block are in the appropriate
+        #       order. For example, you could insert the entries in exactly
+        #       reverse groupcompress order, and we would think that is ok.
+        #       (all the right objects are in one group, and it is fully
+        #       utilized, etc.) For now, we assume that case is rare,
+        #       especially since we should always fetch in 'groupcompress'
+        #       order.
+        return False
+
+    def _check_rebuild_block(self):
+        action, last_byte_used, total_bytes_used = self._check_rebuild_action()
+        if action is None:
+            return
+        if action == 'trim':
+            self._trim_block(last_byte_used)
+        elif action == 'rebuild':
+            self._rebuild_block()
+        else:
+            raise ValueError('unknown rebuild action: %r' % (action,))
 
     def _wire_bytes(self):
         """Return a byte stream suitable for transmitting over the wire."""
@@ -1570,6 +1648,7 @@
1570 block_length = None1648 block_length = None
1571 # XXX: TODO: remove this, it is just for safety checking for now1649 # XXX: TODO: remove this, it is just for safety checking for now
1572 inserted_keys = set()1650 inserted_keys = set()
1651 reuse_this_block = reuse_blocks
1573 for record in stream:1652 for record in stream:
1574 # Raise an error when a record is missing.1653 # Raise an error when a record is missing.
1575 if record.storage_kind == 'absent':1654 if record.storage_kind == 'absent':
@@ -1583,10 +1662,20 @@
1583 if reuse_blocks:1662 if reuse_blocks:
1584 # If the reuse_blocks flag is set, check to see if we can just1663 # If the reuse_blocks flag is set, check to see if we can just
1585 # copy a groupcompress block as-is.1664 # copy a groupcompress block as-is.
1665 # We only check on the first record (groupcompress-block), not
1666 # on all of the (groupcompress-block-ref) entries.
1667 # The reuse_this_block flag is then kept for as long as
1668 if record.storage_kind == 'groupcompress-block':
1669 # Check to see if we really want to re-use this block
1670 insert_manager = record._manager
1671 reuse_this_block = insert_manager.check_is_well_utilized()
1672 else:
1673 reuse_this_block = False
1674 if reuse_this_block:
1675 # We still want to reuse this block
1586 if record.storage_kind == 'groupcompress-block':1676 if record.storage_kind == 'groupcompress-block':
1587 # Insert the raw block into the target repo1677 # Insert the raw block into the target repo
1588 insert_manager = record._manager1678 insert_manager = record._manager
1589 insert_manager._check_rebuild_block()
1590 bytes = record._manager._block.to_bytes()1679 bytes = record._manager._block.to_bytes()
1591 _, start, length = self._access.add_raw_records(1680 _, start, length = self._access.add_raw_records(
1592 [(None, len(bytes))], bytes)[0]1681 [(None, len(bytes))], bytes)[0]
@@ -1597,6 +1686,11 @@
1597 'groupcompress-block-ref'):1686 'groupcompress-block-ref'):
1598 if insert_manager is None:1687 if insert_manager is None:
1599 raise AssertionError('No insert_manager set')1688 raise AssertionError('No insert_manager set')
1689 if insert_manager is not record._manager:
1690 raise AssertionError('insert_manager does not match'
1691 ' the current record, we cannot be positive'
1692 ' that the appropriate content was inserted.'
1693 )
1600 value = "%d %d %d %d" % (block_start, block_length,1694 value = "%d %d %d %d" % (block_start, block_length,
1601 record._start, record._end)1695 record._start, record._end)
1602 nodes = [(record.key, value, (record.parents,))]1696 nodes = [(record.key, value, (record.parents,))]
16031697
=== modified file 'bzrlib/lsprof.py'
--- bzrlib/lsprof.py 2009-03-08 06:18:06 +0000
+++ bzrlib/lsprof.py 2009-08-24 21:05:09 +0000
@@ -13,45 +13,74 @@
1313
14__all__ = ['profile', 'Stats']14__all__ = ['profile', 'Stats']
1515
16_g_threadmap = {}
17
18
19def _thread_profile(f, *args, **kwds):
20 # we lose the first profile point for a new thread in order to trampoline
21 # a new Profile object into place
22 global _g_threadmap
23 thr = thread.get_ident()
24 _g_threadmap[thr] = p = Profiler()
25 # this overrides our sys.setprofile hook:
26 p.enable(subcalls=True, builtins=True)
27
28
29def profile(f, *args, **kwds):16def profile(f, *args, **kwds):
30 """Run a function profile.17 """Run a function profile.
3118
32 Exceptions are not caught: If you need stats even when exceptions are to be19 Exceptions are not caught: If you need stats even when exceptions are to be
33 raised, passing in a closure that will catch the exceptions and transform20 raised, pass in a closure that will catch the exceptions and transform them
34 them appropriately for your driver function.21 appropriately for your driver function.
3522
36 :return: The function's return value and a stats object.23 :return: The function's return value and a stats object.
37 """24 """
38 global _g_threadmap25 profiler = BzrProfiler()
39 p = Profiler()26 profiler.start()
40 p.enable(subcalls=True)
41 threading.setprofile(_thread_profile)
42 try:27 try:
43 ret = f(*args, **kwds)28 ret = f(*args, **kwds)
44 finally:29 finally:
45 p.disable()30 stats = profiler.stop()
46 for pp in _g_threadmap.values():31 return ret, stats
32
33
34class BzrProfiler(object):
35 """Bzr utility wrapper around Profiler.
36
37 For most uses the module level 'profile()' function will be suitable.
38 However profiling when a simple wrapped function isn't available may
39 be easier to accomplish using this class.
40
41 To use it, create a BzrProfiler and call start() on it. Some arbitrary
42 time later call stop() to stop profiling and retrieve the statistics
43 from the code executed in the interim.
44 """
45
46 def start(self):
47 """Start profiling.
48
49 This hooks into threading and will record all calls made until
50 stop() is called.
51 """
52 self._g_threadmap = {}
53 self.p = Profiler()
54 self.p.enable(subcalls=True)
55 threading.setprofile(self._thread_profile)
56
57 def stop(self):
58 """Stop profiling.
59
60 This unhooks from threading and cleans up the profiler, returning
61 the gathered Stats object.
62
63 :return: A bzrlib.lsprof.Stats object.
64 """
65 self.p.disable()
66 for pp in self._g_threadmap.values():
47 pp.disable()67 pp.disable()
48 threading.setprofile(None)68 threading.setprofile(None)
69 p = self.p
70 self.p = None
71 threads = {}
72 for tid, pp in self._g_threadmap.items():
73 threads[tid] = Stats(pp.getstats(), {})
74 self._g_threadmap = None
75 return Stats(p.getstats(), threads)
4976
50 threads = {}77 def _thread_profile(self, f, *args, **kwds):
51 for tid, pp in _g_threadmap.items():78 # we lose the first profile point for a new thread in order to
52 threads[tid] = Stats(pp.getstats(), {})79 # trampoline a new Profile object into place
53 _g_threadmap = {}80 thr = thread.get_ident()
54 return ret, Stats(p.getstats(), threads)81 self._g_threadmap[thr] = p = Profiler()
82 # this overrides our sys.setprofile hook:
83 p.enable(subcalls=True, builtins=True)
5584
5685
57class Stats(object):86class Stats(object):
5887
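The start()/stop() pattern that the new BzrProfiler class exposes can be illustrated with the stdlib profiler. This is a hypothetical stand-in for illustration only: it uses `cProfile` rather than bzrlib's `Profiler`, and it does not hook other threads the way BzrProfiler does via `threading.setprofile`.

```python
import cProfile
import io
import pstats

class SimpleProfiler(object):
    """Stand-in showing the start()/stop() shape of BzrProfiler."""

    def start(self):
        # Begin recording all calls made from this point on.
        self._profile = cProfile.Profile()
        self._profile.enable()

    def stop(self):
        # Stop recording, clean up, and hand back the gathered stats.
        self._profile.disable()
        stats = pstats.Stats(self._profile, stream=io.StringIO())
        self._profile = None
        return stats

profiler = SimpleProfiler()
profiler.start()
try:
    total = sum(i * i for i in range(1000))  # arbitrary code under profile
finally:
    stats = profiler.stop()
```

As with BzrProfiler, the caller brackets an arbitrary stretch of code with start() and stop() and gets the statistics back from stop().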
=== modified file 'bzrlib/repofmt/groupcompress_repo.py'
--- bzrlib/repofmt/groupcompress_repo.py 2009-08-24 19:34:13 +0000
+++ bzrlib/repofmt/groupcompress_repo.py 2009-09-01 06:10:24 +0000
@@ -932,7 +932,7 @@
932 super(GroupCHKStreamSource, self).__init__(from_repository, to_format)932 super(GroupCHKStreamSource, self).__init__(from_repository, to_format)
933 self._revision_keys = None933 self._revision_keys = None
934 self._text_keys = None934 self._text_keys = None
935 # self._text_fetch_order = 'unordered'935 self._text_fetch_order = 'groupcompress'
936 self._chk_id_roots = None936 self._chk_id_roots = None
937 self._chk_p_id_roots = None937 self._chk_p_id_roots = None
938938
@@ -949,7 +949,7 @@
949 p_id_roots_set = set()949 p_id_roots_set = set()
950 source_vf = self.from_repository.inventories950 source_vf = self.from_repository.inventories
951 stream = source_vf.get_record_stream(inventory_keys,951 stream = source_vf.get_record_stream(inventory_keys,
952 'unordered', True)952 'groupcompress', True)
953 for record in stream:953 for record in stream:
954 if record.storage_kind == 'absent':954 if record.storage_kind == 'absent':
955 if allow_absent:955 if allow_absent:
956956
=== modified file 'bzrlib/repository.py'
--- bzrlib/repository.py 2009-08-30 22:02:45 +0000
+++ bzrlib/repository.py 2009-09-03 15:26:27 +0000
@@ -3844,6 +3844,9 @@
3844 possible_trees.append((basis_id, cache[basis_id]))3844 possible_trees.append((basis_id, cache[basis_id]))
3845 basis_id, delta = self._get_delta_for_revision(tree, parent_ids,3845 basis_id, delta = self._get_delta_for_revision(tree, parent_ids,
3846 possible_trees)3846 possible_trees)
3847 revision = self.source.get_revision(current_revision_id)
3848 pending_deltas.append((basis_id, delta,
3849 current_revision_id, revision.parent_ids))
3847 if self._converting_to_rich_root:3850 if self._converting_to_rich_root:
3848 self._revision_id_to_root_id[current_revision_id] = \3851 self._revision_id_to_root_id[current_revision_id] = \
3849 tree.get_root_id()3852 tree.get_root_id()
@@ -3878,9 +3881,6 @@
3878 if entry.revision == file_revision:3881 if entry.revision == file_revision:
3879 texts_possibly_new_in_tree.remove(file_key)3882 texts_possibly_new_in_tree.remove(file_key)
3880 text_keys.update(texts_possibly_new_in_tree)3883 text_keys.update(texts_possibly_new_in_tree)
3881 revision = self.source.get_revision(current_revision_id)
3882 pending_deltas.append((basis_id, delta,
3883 current_revision_id, revision.parent_ids))
3884 pending_revisions.append(revision)3884 pending_revisions.append(revision)
3885 cache[current_revision_id] = tree3885 cache[current_revision_id] = tree
3886 basis_id = current_revision_id3886 basis_id = current_revision_id
38873887
=== modified file 'bzrlib/smart/repository.py'
--- bzrlib/smart/repository.py 2009-08-14 00:55:42 +0000
+++ bzrlib/smart/repository.py 2009-09-02 22:29:55 +0000
@@ -519,36 +519,92 @@
519 yield pack_writer.end()519 yield pack_writer.end()
520520
521521
522class _ByteStreamDecoder(object):
523 """Helper for _byte_stream_to_stream.
524
525 Broadly this class has to unwrap two layers of iterators:
526 (type, substream)
527 (substream details)
528
529 This is complicated by wishing to return type, iterator_for_type, but
530 getting the data for iterator_for_type when we find out type: we can't
531 simply pass a generator down to the NetworkRecordStream parser; instead
532 we have a little local state to seed each NetworkRecordStream instance,
533 and gather the type that we'll be yielding.
534
535 :ivar byte_stream: The byte stream being decoded.
536 :ivar stream_decoder: A pack parser used to decode the bytestream
537 :ivar current_type: The current type, used to join adjacent records of the
538 same type into a single stream.
539 :ivar first_bytes: The first bytes to give the next NetworkRecordStream.
540 """
541
542 def __init__(self, byte_stream):
543 """Create a _ByteStreamDecoder."""
544 self.stream_decoder = pack.ContainerPushParser()
545 self.current_type = None
546 self.first_bytes = None
547 self.byte_stream = byte_stream
548
549 def iter_stream_decoder(self):
550 """Iterate the contents of the pack from stream_decoder."""
551 # dequeue pending items
552 for record in self.stream_decoder.read_pending_records():
553 yield record
554 # Pull bytes of the wire, decode them to records, yield those records.
555 for bytes in self.byte_stream:
556 self.stream_decoder.accept_bytes(bytes)
557 for record in self.stream_decoder.read_pending_records():
558 yield record
559
560 def iter_substream_bytes(self):
561 if self.first_bytes is not None:
562 yield self.first_bytes
563 # If we run out of pack records, signal the outer layer to stop.
564 self.first_bytes = None
565 for record in self.iter_pack_records:
566 record_names, record_bytes = record
567 record_name, = record_names
568 substream_type = record_name[0]
569 if substream_type != self.current_type:
570 # end of a substream, seed the next substream.
571 self.current_type = substream_type
572 self.first_bytes = record_bytes
573 return
574 yield record_bytes
575
576 def record_stream(self):
577 """Yield substream_type, substream from the byte stream."""
578 self.seed_state()
579 # Make and consume sub generators, one per substream type:
580 while self.first_bytes is not None:
581 substream = NetworkRecordStream(self.iter_substream_bytes())
582 # after substream is fully consumed, self.current_type is set to
583 # the next type, and self.first_bytes is set to the matching bytes.
584 yield self.current_type, substream.read()
585
586 def seed_state(self):
587 """Prepare the _ByteStreamDecoder to decode from the pack stream."""
588 # Set a single generator we can use to get data from the pack stream.
589 self.iter_pack_records = self.iter_stream_decoder()
590 # Seed the very first subiterator with content; after this each one
591 # seeds the next.
592 list(self.iter_substream_bytes())
593
594
522def _byte_stream_to_stream(byte_stream):595def _byte_stream_to_stream(byte_stream):
523 """Convert a byte stream into a format and a stream.596 """Convert a byte stream into a format and a stream.
524597
525 :param byte_stream: A bytes iterator, as output by _stream_to_byte_stream.598 :param byte_stream: A bytes iterator, as output by _stream_to_byte_stream.
526 :return: (RepositoryFormat, stream_generator)599 :return: (RepositoryFormat, stream_generator)
527 """600 """
528 stream_decoder = pack.ContainerPushParser()601 decoder = _ByteStreamDecoder(byte_stream)
529 def record_stream():
530 """Closure to return the substreams."""
531 # May have fully parsed records already.
532 for record in stream_decoder.read_pending_records():
533 record_names, record_bytes = record
534 record_name, = record_names
535 substream_type = record_name[0]
536 substream = NetworkRecordStream([record_bytes])
537 yield substream_type, substream.read()
538 for bytes in byte_stream:
539 stream_decoder.accept_bytes(bytes)
540 for record in stream_decoder.read_pending_records():
541 record_names, record_bytes = record
542 record_name, = record_names
543 substream_type = record_name[0]
544 substream = NetworkRecordStream([record_bytes])
545 yield substream_type, substream.read()
546 for bytes in byte_stream:602 for bytes in byte_stream:
547 stream_decoder.accept_bytes(bytes)603 decoder.stream_decoder.accept_bytes(bytes)
548 for record in stream_decoder.read_pending_records(max=1):604 for record in decoder.stream_decoder.read_pending_records(max=1):
549 record_names, src_format_name = record605 record_names, src_format_name = record
550 src_format = network_format_registry.get(src_format_name)606 src_format = network_format_registry.get(src_format_name)
551 return src_format, record_stream()607 return src_format, decoder.record_stream()
552608
553609
554class SmartServerRepositoryUnlock(SmartServerRepositoryRequest):610class SmartServerRepositoryUnlock(SmartServerRepositoryRequest):
555611
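The net effect of the _ByteStreamDecoder refactoring above is that adjacent records sharing a substream type are joined into a single (type, substream) pair instead of one substream per record. A minimal sketch of that regrouping, with plain (type, payload) tuples standing in for the parsed pack records (the real class parses a pack container incrementally and wraps each group in a NetworkRecordStream):

```python
from itertools import groupby

def regroup(records):
    """Join adjacent records of the same substream type into one group.

    records: iterable of (substream_type, payload) pairs.
    """
    for substream_type, group in groupby(records, key=lambda r: r[0]):
        yield substream_type, [payload for _, payload in group]

records = [('texts', b'a'), ('texts', b'b'),
           ('inventories', b'c'), ('texts', b'd')]
substreams = list(regroup(records))
```

Note that, as in the decoder, only *adjacent* records are merged; a later run of an earlier type starts a fresh substream.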
=== modified file 'bzrlib/tests/__init__.py'
--- bzrlib/tests/__init__.py 2009-08-24 20:30:18 +0000
+++ bzrlib/tests/__init__.py 2009-08-28 21:05:31 +0000
@@ -28,6 +28,7 @@
2828
29import atexit29import atexit
30import codecs30import codecs
31from copy import copy
31from cStringIO import StringIO32from cStringIO import StringIO
32import difflib33import difflib
33import doctest34import doctest
@@ -174,17 +175,47 @@
174 self._overall_start_time = time.time()175 self._overall_start_time = time.time()
175 self._strict = strict176 self._strict = strict
176177
177 def done(self):178 def stopTestRun(self):
178 # nb: called stopTestRun in the version of this that Python merged179 run = self.testsRun
179 # upstream, according to lifeless 20090803180 actionTaken = "Ran"
181 stopTime = time.time()
182 timeTaken = stopTime - self.startTime
183 self.printErrors()
184 self.stream.writeln(self.separator2)
185 self.stream.writeln("%s %d test%s in %.3fs" % (actionTaken,
186 run, run != 1 and "s" or "", timeTaken))
187 self.stream.writeln()
188 if not self.wasSuccessful():
189 self.stream.write("FAILED (")
190 failed, errored = map(len, (self.failures, self.errors))
191 if failed:
192 self.stream.write("failures=%d" % failed)
193 if errored:
194 if failed: self.stream.write(", ")
195 self.stream.write("errors=%d" % errored)
196 if self.known_failure_count:
197 if failed or errored: self.stream.write(", ")
198 self.stream.write("known_failure_count=%d" %
199 self.known_failure_count)
200 self.stream.writeln(")")
201 else:
202 if self.known_failure_count:
203 self.stream.writeln("OK (known_failures=%d)" %
204 self.known_failure_count)
205 else:
206 self.stream.writeln("OK")
207 if self.skip_count > 0:
208 skipped = self.skip_count
209 self.stream.writeln('%d test%s skipped' %
210 (skipped, skipped != 1 and "s" or ""))
211 if self.unsupported:
212 for feature, count in sorted(self.unsupported.items()):
213 self.stream.writeln("Missing feature '%s' skipped %d tests." %
214 (feature, count))
180 if self._strict:215 if self._strict:
181 ok = self.wasStrictlySuccessful()216 ok = self.wasStrictlySuccessful()
182 else:217 else:
183 ok = self.wasSuccessful()218 ok = self.wasSuccessful()
184 if ok:
185 self.stream.write('tests passed\n')
186 else:
187 self.stream.write('tests failed\n')
188 if TestCase._first_thread_leaker_id:219 if TestCase._first_thread_leaker_id:
189 self.stream.write(220 self.stream.write(
190 '%s is leaking threads among %d leaking tests.\n' % (221 '%s is leaking threads among %d leaking tests.\n' % (
@@ -382,12 +413,12 @@
382 else:413 else:
383 raise errors.BzrError("Unknown whence %r" % whence)414 raise errors.BzrError("Unknown whence %r" % whence)
384415
385 def finished(self):
386 pass
387
388 def report_cleaning_up(self):416 def report_cleaning_up(self):
389 pass417 pass
390418
419 def startTestRun(self):
420 self.startTime = time.time()
421
391 def report_success(self, test):422 def report_success(self, test):
392 pass423 pass
393424
@@ -420,15 +451,14 @@
420 self.pb.update_latency = 0451 self.pb.update_latency = 0
421 self.pb.show_transport_activity = False452 self.pb.show_transport_activity = False
422453
423 def done(self):454 def stopTestRun(self):
424 # called when the tests that are going to run have run455 # called when the tests that are going to run have run
425 self.pb.clear()456 self.pb.clear()
426 super(TextTestResult, self).done()
427
428 def finished(self):
429 self.pb.finished()457 self.pb.finished()
458 super(TextTestResult, self).stopTestRun()
430459
431 def report_starting(self):460 def startTestRun(self):
461 super(TextTestResult, self).startTestRun()
432 self.pb.update('[test 0/%d] Starting' % (self.num_tests))462 self.pb.update('[test 0/%d] Starting' % (self.num_tests))
433463
434 def printErrors(self):464 def printErrors(self):
@@ -513,7 +543,8 @@
513 result = a_string543 result = a_string
514 return result.ljust(final_width)544 return result.ljust(final_width)
515545
516 def report_starting(self):546 def startTestRun(self):
547 super(VerboseTestResult, self).startTestRun()
517 self.stream.write('running %d tests...\n' % self.num_tests)548 self.stream.write('running %d tests...\n' % self.num_tests)
518549
519 def report_test_start(self, test):550 def report_test_start(self, test):
@@ -577,88 +608,57 @@
577 descriptions=0,608 descriptions=0,
578 verbosity=1,609 verbosity=1,
579 bench_history=None,610 bench_history=None,
580 list_only=False,
581 strict=False,611 strict=False,
612 result_decorators=None,
582 ):613 ):
614 """Create a TextTestRunner.
615
616 :param result_decorators: An optional list of decorators to apply
617 to the result object being used by the runner. Decorators are
618 applied left to right - the first element in the list is the
619 innermost decorator.
620 """
583 self.stream = unittest._WritelnDecorator(stream)621 self.stream = unittest._WritelnDecorator(stream)
584 self.descriptions = descriptions622 self.descriptions = descriptions
585 self.verbosity = verbosity623 self.verbosity = verbosity
586 self._bench_history = bench_history624 self._bench_history = bench_history
587 self.list_only = list_only
588 self._strict = strict625 self._strict = strict
626 self._result_decorators = result_decorators or []
589627
590 def run(self, test):628 def run(self, test):
591 "Run the given test case or test suite."629 "Run the given test case or test suite."
592 startTime = time.time()
593 if self.verbosity == 1:630 if self.verbosity == 1:
594 result_class = TextTestResult631 result_class = TextTestResult
595 elif self.verbosity >= 2:632 elif self.verbosity >= 2:
596 result_class = VerboseTestResult633 result_class = VerboseTestResult
597 result = result_class(self.stream,634 original_result = result_class(self.stream,
598 self.descriptions,635 self.descriptions,
599 self.verbosity,636 self.verbosity,
600 bench_history=self._bench_history,637 bench_history=self._bench_history,
601 strict=self._strict,638 strict=self._strict,
602 )639 )
603 result.stop_early = self.stop_on_failure640 # Signal to result objects that look at stop early policy to stop,
604 result.report_starting()641 original_result.stop_early = self.stop_on_failure
605 if self.list_only:642 result = original_result
606 if self.verbosity >= 2:643 for decorator in self._result_decorators:
607 self.stream.writeln("Listing tests only ...\n")644 result = decorator(result)
608 run = 0645 result.stop_early = self.stop_on_failure
609 for t in iter_suite_tests(test):646 try:
610 self.stream.writeln("%s" % (t.id()))647 import testtools
611 run += 1648 except ImportError:
612 return None649 pass
613 else:650 else:
614 try:651 if isinstance(test, testtools.ConcurrentTestSuite):
615 import testtools652 # We need to catch bzr specific behaviors
616 except ImportError:653 result = BZRTransformingResult(result)
617 test.run(result)654 result.startTestRun()
618 else:655 try:
619 if isinstance(test, testtools.ConcurrentTestSuite):656 test.run(result)
620 # We need to catch bzr specific behaviors657 finally:
621 test.run(BZRTransformingResult(result))658 result.stopTestRun()
622 else:659 # higher level code uses our extended protocol to determine
623 test.run(result)660 # what exit code to give.
624 run = result.testsRun661 return original_result
625 actionTaken = "Ran"
626 stopTime = time.time()
627 timeTaken = stopTime - startTime
628 result.printErrors()
629 self.stream.writeln(result.separator2)
630 self.stream.writeln("%s %d test%s in %.3fs" % (actionTaken,
631 run, run != 1 and "s" or "", timeTaken))
632 self.stream.writeln()
633 if not result.wasSuccessful():
634 self.stream.write("FAILED (")
635 failed, errored = map(len, (result.failures, result.errors))
636 if failed:
637 self.stream.write("failures=%d" % failed)
638 if errored:
639 if failed: self.stream.write(", ")
640 self.stream.write("errors=%d" % errored)
641 if result.known_failure_count:
642 if failed or errored: self.stream.write(", ")
643 self.stream.write("known_failure_count=%d" %
644 result.known_failure_count)
645 self.stream.writeln(")")
646 else:
647 if result.known_failure_count:
648 self.stream.writeln("OK (known_failures=%d)" %
649 result.known_failure_count)
650 else:
651 self.stream.writeln("OK")
652 if result.skip_count > 0:
653 skipped = result.skip_count
654 self.stream.writeln('%d test%s skipped' %
655 (skipped, skipped != 1 and "s" or ""))
656 if result.unsupported:
657 for feature, count in sorted(result.unsupported.items()):
658 self.stream.writeln("Missing feature '%s' skipped %d tests." %
659 (feature, count))
660 result.finished()
661 return result
662662
663663
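The result_decorators chaining in TextTestRunner.run applies decorators left to right, so the first list element ends up innermost. A toy sketch of that ordering, where the `Labelled` class is hypothetical and merely records the nesting (a real decorator would forward TestResult calls):

```python
class Labelled(object):
    """Record how a decorator wraps the result it is given."""

    def __init__(self, name, inner):
        # Nest our name around whatever label the inner object carries.
        self.label = '%s(%s)' % (name, getattr(inner, 'label', 'result'))

original_result = object()
decorators = [lambda r: Labelled('inner', r),   # first element: innermost
              lambda r: Labelled('outer', r)]   # last element: outermost
result = original_result
for decorator in decorators:
    result = decorator(result)
```

Calls made on the final `result` thus pass through the last decorator first, reaching the original result object last.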
664def iter_suite_tests(suite):664def iter_suite_tests(suite):
@@ -928,6 +928,18 @@
928 def _lock_broken(self, result):928 def _lock_broken(self, result):
929 self._lock_actions.append(('broken', result))929 self._lock_actions.append(('broken', result))
930930
931 def start_server(self, transport_server, backing_server=None):
932 """Start transport_server for this test.
933
934 This starts the server, registers a cleanup for it and permits the
935 server's urls to be used.
936 """
937 if backing_server is None:
938 transport_server.setUp()
939 else:
940 transport_server.setUp(backing_server)
941 self.addCleanup(transport_server.tearDown)
942
931 def _ndiff_strings(self, a, b):943 def _ndiff_strings(self, a, b):
932 """Return ndiff between two strings containing lines.944 """Return ndiff between two strings containing lines.
933945
@@ -2067,13 +2079,12 @@
2067 if self.__readonly_server is None:2079 if self.__readonly_server is None:
2068 if self.transport_readonly_server is None:2080 if self.transport_readonly_server is None:
2069 # readonly decorator requested2081 # readonly decorator requested
2070 # bring up the server
2071 self.__readonly_server = ReadonlyServer()2082 self.__readonly_server = ReadonlyServer()
2072 self.__readonly_server.setUp(self.get_vfs_only_server())
2073 else:2083 else:
2084 # explicit readonly transport.
2074 self.__readonly_server = self.create_transport_readonly_server()2085 self.__readonly_server = self.create_transport_readonly_server()
2075 self.__readonly_server.setUp(self.get_vfs_only_server())2086 self.start_server(self.__readonly_server,
2076 self.addCleanup(self.__readonly_server.tearDown)2087 self.get_vfs_only_server())
2077 return self.__readonly_server2088 return self.__readonly_server
20782089
2079 def get_readonly_url(self, relpath=None):2090 def get_readonly_url(self, relpath=None):
@@ -2098,8 +2109,7 @@
2098 """2109 """
2099 if self.__vfs_server is None:2110 if self.__vfs_server is None:
2100 self.__vfs_server = MemoryServer()2111 self.__vfs_server = MemoryServer()
2101 self.__vfs_server.setUp()2112 self.start_server(self.__vfs_server)
2102 self.addCleanup(self.__vfs_server.tearDown)
2103 return self.__vfs_server2113 return self.__vfs_server
21042114
2105 def get_server(self):2115 def get_server(self):
@@ -2112,19 +2122,13 @@
2112 then the self.get_vfs_server is returned.2122 then the self.get_vfs_server is returned.
2113 """2123 """
2114 if self.__server is None:2124 if self.__server is None:
2115 if self.transport_server is None or self.transport_server is self.vfs_transport_factory:2125 if (self.transport_server is None or self.transport_server is
2116 return self.get_vfs_only_server()2126 self.vfs_transport_factory):
2127 self.__server = self.get_vfs_only_server()
2117 else:2128 else:
2118 # bring up a decorated means of access to the vfs only server.2129 # bring up a decorated means of access to the vfs only server.
2119 self.__server = self.transport_server()2130 self.__server = self.transport_server()
2120 try:2131 self.start_server(self.__server, self.get_vfs_only_server())
2121 self.__server.setUp(self.get_vfs_only_server())
2122 except TypeError, e:
2123 # This should never happen; the try:Except here is to assist
2124 # developers having to update code rather than seeing an
2125 # uninformative TypeError.
2126 raise Exception, "Old server API in use: %s, %s" % (self.__server, e)
2127 self.addCleanup(self.__server.tearDown)
2128 return self.__server2132 return self.__server
21292133
2130 def _adjust_url(self, base, relpath):2134 def _adjust_url(self, base, relpath):
@@ -2263,9 +2267,8 @@
22632267
2264 def make_smart_server(self, path):2268 def make_smart_server(self, path):
2265 smart_server = server.SmartTCPServer_for_testing()2269 smart_server = server.SmartTCPServer_for_testing()
2266 smart_server.setUp(self.get_server())2270 self.start_server(smart_server, self.get_server())
2267 remote_transport = get_transport(smart_server.get_url()).clone(path)2271 remote_transport = get_transport(smart_server.get_url()).clone(path)
2268 self.addCleanup(smart_server.tearDown)
2269 return remote_transport2272 return remote_transport
22702273
2271 def make_branch_and_memory_tree(self, relpath, format=None):2274 def make_branch_and_memory_tree(self, relpath, format=None):
@@ -2472,8 +2475,7 @@
2472 """2475 """
2473 if self.__vfs_server is None:2476 if self.__vfs_server is None:
2474 self.__vfs_server = self.vfs_transport_factory()2477 self.__vfs_server = self.vfs_transport_factory()
2475 self.__vfs_server.setUp()2478 self.start_server(self.__vfs_server)
2476 self.addCleanup(self.__vfs_server.tearDown)
2477 return self.__vfs_server2479 return self.__vfs_server
24782480
2479 def make_branch_and_tree(self, relpath, format=None):2481 def make_branch_and_tree(self, relpath, format=None):
@@ -2486,6 +2488,15 @@
2486 repository will also be accessed locally. Otherwise a lightweight2488 repository will also be accessed locally. Otherwise a lightweight
2487 checkout is created and returned.2489 checkout is created and returned.
24882490
2491 We do this because we can't physically create a tree in the local
2492 path, with a branch reference to the transport_factory url, and
2493 a branch + repository in the vfs_transport, unless the vfs_transport
2494 namespace is distinct from the local disk - the two branch objects
2495 would collide. While we could construct a tree with its branch object
2496 pointing at the transport_factory transport in memory, reopening it
2497 would behave unexpectedly, and has in the past caused testing bugs
2498 when we tried to do it that way.
2499
2489 :param format: The BzrDirFormat.2500 :param format: The BzrDirFormat.
2490 :returns: the WorkingTree.2501 :returns: the WorkingTree.
2491 """2502 """
@@ -2762,7 +2773,9 @@
2762 strict=False,2773 strict=False,
2763 runner_class=None,2774 runner_class=None,
2764 suite_decorators=None,2775 suite_decorators=None,
2765 stream=None):2776 stream=None,
2777 result_decorators=None,
2778 ):
2766 """Run a test suite for bzr selftest.2779 """Run a test suite for bzr selftest.
27672780
2768 :param runner_class: The class of runner to use. Must support the2781 :param runner_class: The class of runner to use. Must support the
@@ -2783,8 +2796,8 @@
2783 descriptions=0,2796 descriptions=0,
2784 verbosity=verbosity,2797 verbosity=verbosity,
2785 bench_history=bench_history,2798 bench_history=bench_history,
2786 list_only=list_only,
2787 strict=strict,2799 strict=strict,
2800 result_decorators=result_decorators,
2788 )2801 )
2789 runner.stop_on_failure=stop_on_failure2802 runner.stop_on_failure=stop_on_failure
2790 # built in decorator factories:2803 # built in decorator factories:
@@ -2805,10 +2818,15 @@
2805 decorators.append(CountingDecorator)2818 decorators.append(CountingDecorator)
2806 for decorator in decorators:2819 for decorator in decorators:
2807 suite = decorator(suite)2820 suite = decorator(suite)
2808 result = runner.run(suite)
2809 if list_only:2821 if list_only:
2822 # Done after test suite decoration to allow randomisation etc
2823 # to take effect, though that is of marginal benefit.
2824 if verbosity >= 2:
2825 stream.write("Listing tests only ...\n")
2826 for t in iter_suite_tests(suite):
2827 stream.write("%s\n" % (t.id()))
2810 return True2828 return True
2811 result.done()2829 result = runner.run(suite)
2812 if strict:2830 if strict:
2813 return result.wasStrictlySuccessful()2831 return result.wasStrictlySuccessful()
2814 else:2832 else:
@@ -3131,7 +3149,7 @@
3131 return result3149 return result
31323150
31333151
3134class BZRTransformingResult(unittest.TestResult):3152class ForwardingResult(unittest.TestResult):
31353153
3136 def __init__(self, target):3154 def __init__(self, target):
3137 unittest.TestResult.__init__(self)3155 unittest.TestResult.__init__(self)
@@ -3143,6 +3161,27 @@
3143 def stopTest(self, test):3161 def stopTest(self, test):
3144 self.result.stopTest(test)3162 self.result.stopTest(test)
31453163
3164 def startTestRun(self):
3165 self.result.startTestRun()
3166
3167 def stopTestRun(self):
3168 self.result.stopTestRun()
3169
3170 def addSkip(self, test, reason):
3171 self.result.addSkip(test, reason)
3172
3173 def addSuccess(self, test):
3174 self.result.addSuccess(test)
3175
3176 def addError(self, test, err):
3177 self.result.addError(test, err)
3178
3179 def addFailure(self, test, err):
3180 self.result.addFailure(test, err)
3181
3182
3183class BZRTransformingResult(ForwardingResult):
3184
3146 def addError(self, test, err):3185 def addError(self, test, err):
3147 feature = self._error_looks_like('UnavailableFeature: ', err)3186 feature = self._error_looks_like('UnavailableFeature: ', err)
3148 if feature is not None:3187 if feature is not None:
@@ -3158,12 +3197,6 @@
         else:
             self.result.addFailure(test, err)

-    def addSkip(self, test, reason):
-        self.result.addSkip(test, reason)
-
-    def addSuccess(self, test):
-        self.result.addSuccess(test)
-
     def _error_looks_like(self, prefix, err):
         """Deserialize exception and returns the stringify value."""
         import subunit
@@ -3181,6 +3214,38 @@
         return value


+class ProfileResult(ForwardingResult):
+    """Generate profiling data for all activity between start and success.
+
+    The profile data is appended to the test's _benchcalls attribute and can
+    be accessed by the forwarded-to TestResult.
+
+    While it might be cleaner to accumulate this in stopTest, addSuccess is
+    where our existing output support for lsprof is, and this class aims to
+    fit in with that: while it could be moved it's not necessary to accomplish
+    test profiling, nor would it be dramatically cleaner.
+    """
+
+    def startTest(self, test):
+        self.profiler = bzrlib.lsprof.BzrProfiler()
+        self.profiler.start()
+        ForwardingResult.startTest(self, test)
+
+    def addSuccess(self, test):
+        stats = self.profiler.stop()
+        try:
+            calls = test._benchcalls
+        except AttributeError:
+            test._benchcalls = []
+            calls = test._benchcalls
+        calls.append(((test.id(), "", ""), stats))
+        ForwardingResult.addSuccess(self, test)
+
+    def stopTest(self, test):
+        ForwardingResult.stopTest(self, test)
+        self.profiler = None
+
+
 # Controlled by "bzr selftest -E=..." option
 # Currently supported:
 #   -Eallow_debug     Will no longer clear debug.debug_flags() so it
@@ -3208,6 +3273,7 @@
              runner_class=None,
              suite_decorators=None,
              stream=None,
+             lsprof_tests=False,
              ):
     """Run the whole test suite under the enhanced runner"""
     # XXX: Very ugly way to do this...
@@ -3242,6 +3308,9 @@
         if starting_with:
             # But always filter as requested.
             suite = filter_suite_by_id_startswith(suite, starting_with)
+        result_decorators = []
+        if lsprof_tests:
+            result_decorators.append(ProfileResult)
         return run_suite(suite, 'testbzr', verbose=verbose, pattern=pattern,
                      stop_on_failure=stop_on_failure,
                      transport=transport,
@@ -3255,6 +3324,7 @@
                      runner_class=runner_class,
                      suite_decorators=suite_decorators,
                      stream=stream,
+                     result_decorators=result_decorators,
                      )
     finally:
         default_transport = old_transport
@@ -3416,6 +3486,206 @@
 test_prefix_alias_registry.register('bp', 'bzrlib.plugins')


+def _test_suite_testmod_names():
+    """Return the standard list of test module names to test."""
+    return [
+        'bzrlib.doc',
+        'bzrlib.tests.blackbox',
+        'bzrlib.tests.commands',
+        'bzrlib.tests.per_branch',
+        'bzrlib.tests.per_bzrdir',
+        'bzrlib.tests.per_interrepository',
+        'bzrlib.tests.per_intertree',
+        'bzrlib.tests.per_inventory',
+        'bzrlib.tests.per_interbranch',
+        'bzrlib.tests.per_lock',
+        'bzrlib.tests.per_transport',
+        'bzrlib.tests.per_tree',
+        'bzrlib.tests.per_pack_repository',
+        'bzrlib.tests.per_repository',
+        'bzrlib.tests.per_repository_chk',
+        'bzrlib.tests.per_repository_reference',
+        'bzrlib.tests.per_versionedfile',
+        'bzrlib.tests.per_workingtree',
+        'bzrlib.tests.test__annotator',
+        'bzrlib.tests.test__chk_map',
+        'bzrlib.tests.test__dirstate_helpers',
+        'bzrlib.tests.test__groupcompress',
+        'bzrlib.tests.test__known_graph',
+        'bzrlib.tests.test__rio',
+        'bzrlib.tests.test__walkdirs_win32',
+        'bzrlib.tests.test_ancestry',
+        'bzrlib.tests.test_annotate',
+        'bzrlib.tests.test_api',
+        'bzrlib.tests.test_atomicfile',
+        'bzrlib.tests.test_bad_files',
+        'bzrlib.tests.test_bencode',
+        'bzrlib.tests.test_bisect_multi',
+        'bzrlib.tests.test_branch',
+        'bzrlib.tests.test_branchbuilder',
+        'bzrlib.tests.test_btree_index',
+        'bzrlib.tests.test_bugtracker',
+        'bzrlib.tests.test_bundle',
+        'bzrlib.tests.test_bzrdir',
+        'bzrlib.tests.test__chunks_to_lines',
+        'bzrlib.tests.test_cache_utf8',
+        'bzrlib.tests.test_chk_map',
+        'bzrlib.tests.test_chk_serializer',
+        'bzrlib.tests.test_chunk_writer',
+        'bzrlib.tests.test_clean_tree',
+        'bzrlib.tests.test_commands',
+        'bzrlib.tests.test_commit',
+        'bzrlib.tests.test_commit_merge',
+        'bzrlib.tests.test_config',
+        'bzrlib.tests.test_conflicts',
+        'bzrlib.tests.test_counted_lock',
+        'bzrlib.tests.test_crash',
+        'bzrlib.tests.test_decorators',
+        'bzrlib.tests.test_delta',
+        'bzrlib.tests.test_debug',
+        'bzrlib.tests.test_deprecated_graph',
+        'bzrlib.tests.test_diff',
+        'bzrlib.tests.test_directory_service',
+        'bzrlib.tests.test_dirstate',
+        'bzrlib.tests.test_email_message',
+        'bzrlib.tests.test_eol_filters',
+        'bzrlib.tests.test_errors',
+        'bzrlib.tests.test_export',
+        'bzrlib.tests.test_extract',
+        'bzrlib.tests.test_fetch',
+        'bzrlib.tests.test_fifo_cache',
+        'bzrlib.tests.test_filters',
+        'bzrlib.tests.test_ftp_transport',
+        'bzrlib.tests.test_foreign',
+        'bzrlib.tests.test_generate_docs',
+        'bzrlib.tests.test_generate_ids',
+        'bzrlib.tests.test_globbing',
+        'bzrlib.tests.test_gpg',
+        'bzrlib.tests.test_graph',
+        'bzrlib.tests.test_groupcompress',
+        'bzrlib.tests.test_hashcache',
+        'bzrlib.tests.test_help',
+        'bzrlib.tests.test_hooks',
+        'bzrlib.tests.test_http',
+        'bzrlib.tests.test_http_response',
+        'bzrlib.tests.test_https_ca_bundle',
+        'bzrlib.tests.test_identitymap',
+        'bzrlib.tests.test_ignores',
+        'bzrlib.tests.test_index',
+        'bzrlib.tests.test_info',
+        'bzrlib.tests.test_inv',
+        'bzrlib.tests.test_inventory_delta',
+        'bzrlib.tests.test_knit',
+        'bzrlib.tests.test_lazy_import',
+        'bzrlib.tests.test_lazy_regex',
+        'bzrlib.tests.test_lock',
+        'bzrlib.tests.test_lockable_files',
+        'bzrlib.tests.test_lockdir',
+        'bzrlib.tests.test_log',
+        'bzrlib.tests.test_lru_cache',
+        'bzrlib.tests.test_lsprof',
+        'bzrlib.tests.test_mail_client',
+        'bzrlib.tests.test_memorytree',
+        'bzrlib.tests.test_merge',
+        'bzrlib.tests.test_merge3',
+        'bzrlib.tests.test_merge_core',
+        'bzrlib.tests.test_merge_directive',
+        'bzrlib.tests.test_missing',
+        'bzrlib.tests.test_msgeditor',
+        'bzrlib.tests.test_multiparent',
+        'bzrlib.tests.test_mutabletree',
+        'bzrlib.tests.test_nonascii',
+        'bzrlib.tests.test_options',
+        'bzrlib.tests.test_osutils',
+        'bzrlib.tests.test_osutils_encodings',
+        'bzrlib.tests.test_pack',
+        'bzrlib.tests.test_patch',
+        'bzrlib.tests.test_patches',
+        'bzrlib.tests.test_permissions',
+        'bzrlib.tests.test_plugins',
+        'bzrlib.tests.test_progress',
+        'bzrlib.tests.test_read_bundle',
+        'bzrlib.tests.test_reconcile',
+        'bzrlib.tests.test_reconfigure',
+        'bzrlib.tests.test_registry',
+        'bzrlib.tests.test_remote',
+        'bzrlib.tests.test_rename_map',
+        'bzrlib.tests.test_repository',
+        'bzrlib.tests.test_revert',
+        'bzrlib.tests.test_revision',
+        'bzrlib.tests.test_revisionspec',
+        'bzrlib.tests.test_revisiontree',
+        'bzrlib.tests.test_rio',
+        'bzrlib.tests.test_rules',
+        'bzrlib.tests.test_sampler',
+        'bzrlib.tests.test_selftest',
+        'bzrlib.tests.test_serializer',
+        'bzrlib.tests.test_setup',
+        'bzrlib.tests.test_sftp_transport',
+        'bzrlib.tests.test_shelf',
+        'bzrlib.tests.test_shelf_ui',
+        'bzrlib.tests.test_smart',
+        'bzrlib.tests.test_smart_add',
+        'bzrlib.tests.test_smart_request',
+        'bzrlib.tests.test_smart_transport',
+        'bzrlib.tests.test_smtp_connection',
+        'bzrlib.tests.test_source',
+        'bzrlib.tests.test_ssh_transport',
+        'bzrlib.tests.test_status',
+        'bzrlib.tests.test_store',
+        'bzrlib.tests.test_strace',
+        'bzrlib.tests.test_subsume',
+        'bzrlib.tests.test_switch',
+        'bzrlib.tests.test_symbol_versioning',
+        'bzrlib.tests.test_tag',
+        'bzrlib.tests.test_testament',
+        'bzrlib.tests.test_textfile',
+        'bzrlib.tests.test_textmerge',
+        'bzrlib.tests.test_timestamp',
+        'bzrlib.tests.test_trace',
+        'bzrlib.tests.test_transactions',
+        'bzrlib.tests.test_transform',
+        'bzrlib.tests.test_transport',
+        'bzrlib.tests.test_transport_log',
+        'bzrlib.tests.test_tree',
+        'bzrlib.tests.test_treebuilder',
+        'bzrlib.tests.test_tsort',
+        'bzrlib.tests.test_tuned_gzip',
+        'bzrlib.tests.test_ui',
+        'bzrlib.tests.test_uncommit',
+        'bzrlib.tests.test_upgrade',
+        'bzrlib.tests.test_upgrade_stacked',
+        'bzrlib.tests.test_urlutils',
+        'bzrlib.tests.test_version',
+        'bzrlib.tests.test_version_info',
+        'bzrlib.tests.test_weave',
+        'bzrlib.tests.test_whitebox',
+        'bzrlib.tests.test_win32utils',
+        'bzrlib.tests.test_workingtree',
+        'bzrlib.tests.test_workingtree_4',
+        'bzrlib.tests.test_wsgi',
+        'bzrlib.tests.test_xml',
+        ]
+
+
+def _test_suite_modules_to_doctest():
+    """Return the list of modules to doctest."""
+    return [
+        'bzrlib',
+        'bzrlib.branchbuilder',
+        'bzrlib.export',
+        'bzrlib.inventory',
+        'bzrlib.iterablefile',
+        'bzrlib.lockdir',
+        'bzrlib.merge3',
+        'bzrlib.option',
+        'bzrlib.symbol_versioning',
+        'bzrlib.tests',
+        'bzrlib.timestamp',
+        'bzrlib.version_info_formats.format_custom',
+        ]
+
+
 def test_suite(keep_only=None, starting_with=None):
     """Build and return TestSuite for the whole of bzrlib.

@@ -3427,184 +3697,6 @@
     This function can be replaced if you need to change the default test
     suite on a global basis, but it is not encouraged.
     """
-    testmod_names = [
-        'bzrlib.doc',
-        'bzrlib.tests.blackbox',
-        'bzrlib.tests.commands',
-        'bzrlib.tests.per_branch',
-        'bzrlib.tests.per_bzrdir',
-        'bzrlib.tests.per_interrepository',
-        'bzrlib.tests.per_intertree',
-        'bzrlib.tests.per_inventory',
-        'bzrlib.tests.per_interbranch',
-        'bzrlib.tests.per_lock',
-        'bzrlib.tests.per_transport',
-        'bzrlib.tests.per_tree',
-        'bzrlib.tests.per_pack_repository',
-        'bzrlib.tests.per_repository',
-        'bzrlib.tests.per_repository_chk',
-        'bzrlib.tests.per_repository_reference',
-        'bzrlib.tests.per_versionedfile',
-        'bzrlib.tests.per_workingtree',
-        'bzrlib.tests.test__annotator',
-        'bzrlib.tests.test__chk_map',
-        'bzrlib.tests.test__dirstate_helpers',
-        'bzrlib.tests.test__groupcompress',
-        'bzrlib.tests.test__known_graph',
-        'bzrlib.tests.test__rio',
-        'bzrlib.tests.test__walkdirs_win32',
-        'bzrlib.tests.test_ancestry',
-        'bzrlib.tests.test_annotate',
-        'bzrlib.tests.test_api',
-        'bzrlib.tests.test_atomicfile',
-        'bzrlib.tests.test_bad_files',
-        'bzrlib.tests.test_bencode',
-        'bzrlib.tests.test_bisect_multi',
-        'bzrlib.tests.test_branch',
-        'bzrlib.tests.test_branchbuilder',
-        'bzrlib.tests.test_btree_index',
-        'bzrlib.tests.test_bugtracker',
-        'bzrlib.tests.test_bundle',
-        'bzrlib.tests.test_bzrdir',
-        'bzrlib.tests.test__chunks_to_lines',
-        'bzrlib.tests.test_cache_utf8',
-        'bzrlib.tests.test_chk_map',
-        'bzrlib.tests.test_chk_serializer',
-        'bzrlib.tests.test_chunk_writer',
-        'bzrlib.tests.test_clean_tree',
-        'bzrlib.tests.test_commands',
-        'bzrlib.tests.test_commit',
-        'bzrlib.tests.test_commit_merge',
-        'bzrlib.tests.test_config',
-        'bzrlib.tests.test_conflicts',
-        'bzrlib.tests.test_counted_lock',
-        'bzrlib.tests.test_crash',
-        'bzrlib.tests.test_decorators',
-        'bzrlib.tests.test_delta',
-        'bzrlib.tests.test_debug',
-        'bzrlib.tests.test_deprecated_graph',
-        'bzrlib.tests.test_diff',
-        'bzrlib.tests.test_directory_service',
-        'bzrlib.tests.test_dirstate',
-        'bzrlib.tests.test_email_message',
-        'bzrlib.tests.test_eol_filters',
-        'bzrlib.tests.test_errors',
-        'bzrlib.tests.test_export',
-        'bzrlib.tests.test_extract',
-        'bzrlib.tests.test_fetch',
-        'bzrlib.tests.test_fifo_cache',
-        'bzrlib.tests.test_filters',
-        'bzrlib.tests.test_ftp_transport',
-        'bzrlib.tests.test_foreign',
-        'bzrlib.tests.test_generate_docs',
-        'bzrlib.tests.test_generate_ids',
-        'bzrlib.tests.test_globbing',
-        'bzrlib.tests.test_gpg',
-        'bzrlib.tests.test_graph',
-        'bzrlib.tests.test_groupcompress',
-        'bzrlib.tests.test_hashcache',
-        'bzrlib.tests.test_help',
-        'bzrlib.tests.test_hooks',
-        'bzrlib.tests.test_http',
-        'bzrlib.tests.test_http_response',
-        'bzrlib.tests.test_https_ca_bundle',
-        'bzrlib.tests.test_identitymap',
-        'bzrlib.tests.test_ignores',
-        'bzrlib.tests.test_index',
-        'bzrlib.tests.test_info',
-        'bzrlib.tests.test_inv',
-        'bzrlib.tests.test_inventory_delta',
-        'bzrlib.tests.test_knit',
-        'bzrlib.tests.test_lazy_import',
-        'bzrlib.tests.test_lazy_regex',
-        'bzrlib.tests.test_lock',
-        'bzrlib.tests.test_lockable_files',
-        'bzrlib.tests.test_lockdir',
-        'bzrlib.tests.test_log',
-        'bzrlib.tests.test_lru_cache',
-        'bzrlib.tests.test_lsprof',
-        'bzrlib.tests.test_mail_client',
-        'bzrlib.tests.test_memorytree',
-        'bzrlib.tests.test_merge',
-        'bzrlib.tests.test_merge3',
-        'bzrlib.tests.test_merge_core',
-        'bzrlib.tests.test_merge_directive',
-        'bzrlib.tests.test_missing',
-        'bzrlib.tests.test_msgeditor',
-        'bzrlib.tests.test_multiparent',
-        'bzrlib.tests.test_mutabletree',
-        'bzrlib.tests.test_nonascii',
-        'bzrlib.tests.test_options',
-        'bzrlib.tests.test_osutils',
-        'bzrlib.tests.test_osutils_encodings',
-        'bzrlib.tests.test_pack',
-        'bzrlib.tests.test_patch',
-        'bzrlib.tests.test_patches',
-        'bzrlib.tests.test_permissions',
-        'bzrlib.tests.test_plugins',
-        'bzrlib.tests.test_progress',
-        'bzrlib.tests.test_read_bundle',
-        'bzrlib.tests.test_reconcile',
-        'bzrlib.tests.test_reconfigure',
-        'bzrlib.tests.test_registry',
-        'bzrlib.tests.test_remote',
-        'bzrlib.tests.test_rename_map',
-        'bzrlib.tests.test_repository',
-        'bzrlib.tests.test_revert',
-        'bzrlib.tests.test_revision',
-        'bzrlib.tests.test_revisionspec',
-        'bzrlib.tests.test_revisiontree',
-        'bzrlib.tests.test_rio',
-        'bzrlib.tests.test_rules',
-        'bzrlib.tests.test_sampler',
-        'bzrlib.tests.test_selftest',
-        'bzrlib.tests.test_serializer',
-        'bzrlib.tests.test_setup',
-        'bzrlib.tests.test_sftp_transport',
-        'bzrlib.tests.test_shelf',
-        'bzrlib.tests.test_shelf_ui',
-        'bzrlib.tests.test_smart',
-        'bzrlib.tests.test_smart_add',
-        'bzrlib.tests.test_smart_request',
-        'bzrlib.tests.test_smart_transport',
-        'bzrlib.tests.test_smtp_connection',
-        'bzrlib.tests.test_source',
-        'bzrlib.tests.test_ssh_transport',
-        'bzrlib.tests.test_status',
-        'bzrlib.tests.test_store',
-        'bzrlib.tests.test_strace',
-        'bzrlib.tests.test_subsume',
-        'bzrlib.tests.test_switch',
-        'bzrlib.tests.test_symbol_versioning',
-        'bzrlib.tests.test_tag',
-        'bzrlib.tests.test_testament',
-        'bzrlib.tests.test_textfile',
-        'bzrlib.tests.test_textmerge',
-        'bzrlib.tests.test_timestamp',
-        'bzrlib.tests.test_trace',
-        'bzrlib.tests.test_transactions',
-        'bzrlib.tests.test_transform',
-        'bzrlib.tests.test_transport',
-        'bzrlib.tests.test_transport_log',
-        'bzrlib.tests.test_tree',
-        'bzrlib.tests.test_treebuilder',
-        'bzrlib.tests.test_tsort',
-        'bzrlib.tests.test_tuned_gzip',
-        'bzrlib.tests.test_ui',
-        'bzrlib.tests.test_uncommit',
-        'bzrlib.tests.test_upgrade',
-        'bzrlib.tests.test_upgrade_stacked',
-        'bzrlib.tests.test_urlutils',
-        'bzrlib.tests.test_version',
-        'bzrlib.tests.test_version_info',
-        'bzrlib.tests.test_weave',
-        'bzrlib.tests.test_whitebox',
-        'bzrlib.tests.test_win32utils',
-        'bzrlib.tests.test_workingtree',
-        'bzrlib.tests.test_workingtree_4',
-        'bzrlib.tests.test_wsgi',
-        'bzrlib.tests.test_xml',
-        ]

     loader = TestUtil.TestLoader()

@@ -3639,24 +3731,9 @@
     suite = loader.suiteClass()

     # modules building their suite with loadTestsFromModuleNames
-    suite.addTest(loader.loadTestsFromModuleNames(testmod_names))
+    suite.addTest(loader.loadTestsFromModuleNames(_test_suite_testmod_names()))

-    modules_to_doctest = [
-        'bzrlib',
-        'bzrlib.branchbuilder',
-        'bzrlib.export',
-        'bzrlib.inventory',
-        'bzrlib.iterablefile',
-        'bzrlib.lockdir',
-        'bzrlib.merge3',
-        'bzrlib.option',
-        'bzrlib.symbol_versioning',
-        'bzrlib.tests',
-        'bzrlib.timestamp',
-        'bzrlib.version_info_formats.format_custom',
-        ]
-
-    for mod in modules_to_doctest:
+    for mod in _test_suite_modules_to_doctest():
         if not interesting_module(mod):
             # No tests to keep here, move along
             continue
@@ -3803,8 +3880,7 @@
     :param new_id: The id to assign to it.
     :return: The new test.
     """
-    from copy import deepcopy
-    new_test = deepcopy(test)
+    new_test = copy(test)
     new_test.id = lambda: new_id
     return new_test

=== modified file 'bzrlib/tests/blackbox/test_filesystem_cicp.py'
--- bzrlib/tests/blackbox/test_filesystem_cicp.py 2009-04-06 08:17:53 +0000
+++ bzrlib/tests/blackbox/test_filesystem_cicp.py 2009-08-26 09:06:02 +0000
@@ -216,12 +216,19 @@
216216
217217
218class TestMisc(TestCICPBase):218class TestMisc(TestCICPBase):
219
219 def test_status(self):220 def test_status(self):
220 wt = self._make_mixed_case_tree()221 wt = self._make_mixed_case_tree()
221 self.run_bzr('add')222 self.run_bzr('add')
222223
223 self.check_output('added:\n CamelCaseParent/CamelCase\n lowercaseparent/lowercase\n',224 self.check_output(
224 'status camelcaseparent/camelcase LOWERCASEPARENT/LOWERCASE')225 """added:
226 CamelCaseParent/
227 CamelCaseParent/CamelCase
228 lowercaseparent/
229 lowercaseparent/lowercase
230""",
231 'status camelcaseparent/camelcase LOWERCASEPARENT/LOWERCASE')
225232
226 def test_ci(self):233 def test_ci(self):
227 wt = self._make_mixed_case_tree()234 wt = self._make_mixed_case_tree()
228235
=== modified file 'bzrlib/tests/blackbox/test_info.py'
--- bzrlib/tests/blackbox/test_info.py 2009-08-17 03:47:03 +0000
+++ bzrlib/tests/blackbox/test_info.py 2009-08-25 23:38:10 +0000
@@ -1328,6 +1328,10 @@
     def test_info_locking_oslocks(self):
         if sys.platform == "win32":
             raise TestSkipped("don't use oslocks on win32 in unix manner")
+        # This test tests old (all-in-one, OS lock using) behaviour which
+        # simply cannot work on windows (and is indeed why we changed our
+        # design). As such, don't try to remove the thisFailsStrictLockCheck
+        # call here.
         self.thisFailsStrictLockCheck()

         tree = self.make_branch_and_tree('branch',
=== modified file 'bzrlib/tests/blackbox/test_push.py'
--- bzrlib/tests/blackbox/test_push.py 2009-08-20 04:09:58 +0000
+++ bzrlib/tests/blackbox/test_push.py 2009-08-27 22:17:35 +0000
@@ -576,9 +576,7 @@
     def setUp(self):
         tests.TestCaseWithTransport.setUp(self)
         self.memory_server = RedirectingMemoryServer()
-        self.memory_server.setUp()
-        self.addCleanup(self.memory_server.tearDown)
-
+        self.start_server(self.memory_server)
         # Make the branch and tree that we'll be pushing.
         t = self.make_branch_and_tree('tree')
         self.build_tree(['tree/file'])
=== modified file 'bzrlib/tests/blackbox/test_selftest.py'
--- bzrlib/tests/blackbox/test_selftest.py 2009-08-24 05:23:11 +0000
+++ bzrlib/tests/blackbox/test_selftest.py 2009-08-24 22:32:53 +0000
@@ -172,3 +172,7 @@
             outputs_nothing(['selftest', '--list-only', '--exclude', 'selftest'])
         finally:
             tests.selftest = original_selftest
+
+    def test_lsprof_tests(self):
+        params = self.get_params_passed_to_core('selftest --lsprof-tests')
+        self.assertEqual(True, params[1]["lsprof_tests"])
=== modified file 'bzrlib/tests/blackbox/test_serve.py'
--- bzrlib/tests/blackbox/test_serve.py 2009-07-20 11:27:05 +0000
+++ bzrlib/tests/blackbox/test_serve.py 2009-08-27 22:17:35 +0000
@@ -209,8 +209,7 @@
         ssh_server = SFTPServer(StubSSHServer)
         # XXX: We *don't* want to override the default SSH vendor, so we set
         # _vendor to what _get_ssh_vendor returns.
-        ssh_server.setUp()
-        self.addCleanup(ssh_server.tearDown)
+        self.start_server(ssh_server)
         port = ssh_server._listener.port

         # Access the branch via a bzr+ssh URL. The BZR_REMOTE_PATH environment
=== modified file 'bzrlib/tests/blackbox/test_split.py'
--- bzrlib/tests/blackbox/test_split.py 2009-06-08 02:02:08 +0000
+++ bzrlib/tests/blackbox/test_split.py 2009-08-27 21:48:33 +0000
@@ -31,7 +31,7 @@
         wt.add(['b', 'b/c'])
         wt.commit('rev1')
         self.run_bzr('split a/b')
-        self.run_bzr_error(('.* is not versioned',), 'split q')
+        self.run_bzr_error(('.* is not versioned',), 'split q', working_dir='a')

     def test_split_repo_failure(self):
         repo = self.make_repository('branch', shared=True, format='knit')
=== modified file 'bzrlib/tests/http_utils.py'
--- bzrlib/tests/http_utils.py 2009-05-04 14:48:21 +0000
+++ bzrlib/tests/http_utils.py 2009-08-27 22:17:35 +0000
@@ -133,8 +133,7 @@
         """Get the server instance for the secondary transport."""
         if self.__secondary_server is None:
             self.__secondary_server = self.create_transport_secondary_server()
-            self.__secondary_server.setUp()
-            self.addCleanup(self.__secondary_server.tearDown)
+            self.start_server(self.__secondary_server)
         return self.__secondary_server


=== modified file 'bzrlib/tests/per_branch/test_push.py'
--- bzrlib/tests/per_branch/test_push.py 2009-08-14 00:55:42 +0000
+++ bzrlib/tests/per_branch/test_push.py 2009-08-27 22:17:35 +0000
@@ -394,8 +394,7 @@
         # Create a smart server that publishes whatever the backing VFS server
         # does.
         self.smart_server = server.SmartTCPServer_for_testing()
-        self.smart_server.setUp(self.get_server())
-        self.addCleanup(self.smart_server.tearDown)
+        self.start_server(self.smart_server, self.get_server())
         # Make two empty branches, 'empty' and 'target'.
        self.empty_branch = self.make_branch('empty')
        self.make_branch('target')
=== modified file 'bzrlib/tests/per_pack_repository.py'
--- bzrlib/tests/per_pack_repository.py 2009-08-14 00:55:42 +0000
+++ bzrlib/tests/per_pack_repository.py 2009-08-27 22:17:35 +0000
@@ -271,8 +271,7 @@
         # failing to delete obsolete packs is not fatal
         format = self.get_format()
         server = fakenfs.FakeNFSServer()
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         transport = get_transport(server.get_url())
         bzrdir = self.get_format().initialize_on_transport(transport)
         repo = bzrdir.create_repository()
@@ -1020,8 +1019,7 @@
         # Create a smart server that publishes whatever the backing VFS server
         # does.
         self.smart_server = server.SmartTCPServer_for_testing()
-        self.smart_server.setUp(self.get_server())
-        self.addCleanup(self.smart_server.tearDown)
+        self.start_server(self.smart_server, self.get_server())
         # Log all HPSS calls into self.hpss_calls.
         client._SmartClient.hooks.install_named_hook(
             'call', self.capture_hpss_call, None)
=== modified file 'bzrlib/tests/per_repository/test_repository.py'
--- bzrlib/tests/per_repository/test_repository.py 2009-08-18 22:03:18 +0000
+++ bzrlib/tests/per_repository/test_repository.py 2009-08-27 22:17:35 +0000
@@ -823,9 +823,8 @@
         be created at the given path."""
         repo = self.make_repository(path, shared=shared)
         smart_server = server.SmartTCPServer_for_testing()
-        smart_server.setUp(self.get_server())
+        self.start_server(smart_server, self.get_server())
         remote_transport = get_transport(smart_server.get_url()).clone(path)
-        self.addCleanup(smart_server.tearDown)
         remote_bzrdir = bzrdir.BzrDir.open_from_transport(remote_transport)
         remote_repo = remote_bzrdir.open_repository()
         return remote_repo
@@ -897,14 +896,6 @@
         local_repo = local_bzrdir.open_repository()
         self.assertEqual(remote_backing_repo._format, local_repo._format)

-    # XXX: this helper probably belongs on TestCaseWithTransport
-    def make_smart_server(self, path):
-        smart_server = server.SmartTCPServer_for_testing()
-        smart_server.setUp(self.get_server())
-        remote_transport = get_transport(smart_server.get_url()).clone(path)
-        self.addCleanup(smart_server.tearDown)
-        return remote_transport
-
     def test_clone_to_hpss(self):
         pre_metadir_formats = [RepositoryFormat5(), RepositoryFormat6()]
         if self.repository_format in pre_metadir_formats:
=== modified file 'bzrlib/tests/per_workingtree/test_flush.py'
--- bzrlib/tests/per_workingtree/test_flush.py 2009-07-31 17:42:29 +0000
+++ bzrlib/tests/per_workingtree/test_flush.py 2009-08-25 23:38:10 +0000
@@ -16,7 +16,9 @@

 """Tests for WorkingTree.flush."""

+import sys
 from bzrlib import errors, inventory
+from bzrlib.tests import TestSkipped
 from bzrlib.tests.per_workingtree import TestCaseWithWorkingTree


@@ -31,8 +33,14 @@
         tree.unlock()

     def test_flush_when_inventory_is_modified(self):
+        if sys.platform == "win32":
+            raise TestSkipped("don't use oslocks on win32 in unix manner")
         # This takes a write lock on the source tree, then opens a second copy
-        # and tries to grab a read lock, which is a bit bogus
+        # and tries to grab a read lock. This works on Unix and is a reasonable
+        # way to detect when the file is actually written to, but it won't work
+        # (as a test) on Windows. It might be nice to instead stub out the
+        # functions used to write and that way do both less work and also be
+        # able to execute on Windows.
         self.thisFailsStrictLockCheck()
         # when doing a flush the inventory should be written if needed.
         # we test that by changing the inventory (using
=== modified file 'bzrlib/tests/per_workingtree/test_locking.py'
--- bzrlib/tests/per_workingtree/test_locking.py 2009-07-31 17:42:29 +0000
+++ bzrlib/tests/per_workingtree/test_locking.py 2009-08-25 23:38:10 +0000
@@ -16,11 +16,14 @@

 """Tests for the (un)lock interfaces on all working tree implemenations."""

+import sys
+
 from bzrlib import (
     branch,
     errors,
     lockdir,
     )
+from bzrlib.tests import TestSkipped
 from bzrlib.tests.per_workingtree import TestCaseWithWorkingTree


@@ -105,8 +108,14 @@

         :param methodname: The lock method to use to establish locks.
         """
+        if sys.platform == "win32":
+            raise TestSkipped("don't use oslocks on win32 in unix manner")
-        # This write locks the local tree, and then grabs a read lock on a
-        # copy, which is bogus and the test just needs to be rewritten.
+        # This helper takes a write lock on the source tree, then opens a
+        # second copy and tries to grab a read lock. This works on Unix and is
+        # a reasonable way to detect when the file is actually written to, but
+        # it won't work (as a test) on Windows. It might be nice to instead
+        # stub out the functions used to write and that way do both less work
+        # and also be able to execute on Windows.
         self.thisFailsStrictLockCheck()
         # when unlocking the last lock count from tree_write_lock,
         # the tree should do a flush().
=== modified file 'bzrlib/tests/per_workingtree/test_set_root_id.py'
--- bzrlib/tests/per_workingtree/test_set_root_id.py 2009-08-21 01:48:13 +0000
+++ bzrlib/tests/per_workingtree/test_set_root_id.py 2009-08-28 05:00:33 +0000
@@ -16,13 +16,18 @@
 
 """Tests for WorkingTree.set_root_id"""
 
+import sys
+
 from bzrlib import inventory
+from bzrlib.tests import TestSkipped
 from bzrlib.tests.per_workingtree import TestCaseWithWorkingTree
 
 
 class TestSetRootId(TestCaseWithWorkingTree):
 
     def test_set_and_read_unicode(self):
+        if sys.platform == "win32":
+            raise TestSkipped("don't use oslocks on win32 in unix manner")
         # This test tests that setting the root doesn't flush, so it
         # deliberately tests concurrent access that isn't possible on windows.
         self.thisFailsStrictLockCheck()
 
=== modified file 'bzrlib/tests/test__known_graph.py'
--- bzrlib/tests/test__known_graph.py 2009-08-26 16:03:59 +0000
+++ bzrlib/tests/test__known_graph.py 2009-09-02 13:32:52 +0000
@@ -768,3 +768,70 @@
                  },
              'E',
              [])
+
+
+class TestKnownGraphStableReverseTopoSort(TestCaseWithKnownGraph):
+    """Test the sort order returned by gc_sort."""
+
+    def assertSorted(self, expected, parent_map):
+        graph = self.make_known_graph(parent_map)
+        value = graph.gc_sort()
+        if expected != value:
+            self.assertEqualDiff(pprint.pformat(expected),
+                                 pprint.pformat(value))
+
+    def test_empty(self):
+        self.assertSorted([], {})
+
+    def test_single(self):
+        self.assertSorted(['a'], {'a':()})
+        self.assertSorted([('a',)], {('a',):()})
+        self.assertSorted([('F', 'a')], {('F', 'a'):()})
+
+    def test_linear(self):
+        self.assertSorted(['c', 'b', 'a'], {'a':(), 'b':('a',), 'c':('b',)})
+        self.assertSorted([('c',), ('b',), ('a',)],
+                          {('a',):(), ('b',): (('a',),), ('c',): (('b',),)})
+        self.assertSorted([('F', 'c'), ('F', 'b'), ('F', 'a')],
+                          {('F', 'a'):(), ('F', 'b'): (('F', 'a'),),
+                           ('F', 'c'): (('F', 'b'),)})
+
+    def test_mixed_ancestries(self):
+        # Each prefix should be sorted separately
+        self.assertSorted([('F', 'c'), ('F', 'b'), ('F', 'a'),
+                           ('G', 'c'), ('G', 'b'), ('G', 'a'),
+                           ('Q', 'c'), ('Q', 'b'), ('Q', 'a'),
+                          ],
+                          {('F', 'a'):(), ('F', 'b'): (('F', 'a'),),
+                           ('F', 'c'): (('F', 'b'),),
+                           ('G', 'a'):(), ('G', 'b'): (('G', 'a'),),
+                           ('G', 'c'): (('G', 'b'),),
+                           ('Q', 'a'):(), ('Q', 'b'): (('Q', 'a'),),
+                           ('Q', 'c'): (('Q', 'b'),),
+                          })
+
+    def test_stable_sorting(self):
+        # the sort order should be stable even when extra nodes are added
+        self.assertSorted(['b', 'c', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',)})
+        self.assertSorted(['b', 'c', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',)})
+        self.assertSorted(['b', 'c', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',)})
+        self.assertSorted(['Z', 'b', 'c', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',),
+                           'Z':('a',)})
+        self.assertSorted(['e', 'b', 'c', 'f', 'Z', 'd', 'a'],
+                          {'a':(), 'b':('a',), 'c':('a',), 'd':('a',),
+                           'Z':('a',),
+                           'e':('b', 'c', 'd'),
+                           'f':('d', 'Z'),
+                          })
+
+    def test_skip_ghost(self):
+        self.assertSorted(['b', 'c', 'a'],
+                          {'a':(), 'b':('a', 'ghost'), 'c':('a',)})
+
+    def test_skip_mainline_ghost(self):
+        self.assertSorted(['b', 'c', 'a'],
+                          {'a':(), 'b':('ghost', 'a'), 'c':('a',)})
 
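The gc_sort tests above pin down an ordering contract: children sort before all of their parents, each prefix is kept together, and ghost parents (referenced but absent from the graph) are skipped. As a minimal sketch of that contract only — not bzrlib's actual KnownGraph.gc_sort implementation, which additionally enforces the prefix grouping and stability rules the tests check — a reverse topological sort can be written as:

```python
def reverse_topo_sort(parent_map):
    """Return keys so that every key appears before any of its parents.

    Illustrative sketch of the ordering contract exercised above; ghost
    parents (keys referenced but absent from parent_map) are skipped.
    """
    order = []          # built parents-first (plain topological order)
    seen = set()

    def visit(key):
        if key in seen or key not in parent_map:
            return      # already emitted, or a ghost
        seen.add(key)
        for parent in parent_map[key]:
            visit(parent)
        order.append(key)

    # Visit keys in sorted order so the output is deterministic.
    for key in sorted(parent_map):
        visit(key)
    order.reverse()     # children-first, i.e. reverse topological
    return order
```

The tie-breaking here is simply lexicographic, so the exact output can differ from gc_sort's; only the children-before-parents property is shared.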
=== modified file 'bzrlib/tests/test_bundle.py'
--- bzrlib/tests/test_bundle.py 2009-08-04 14:10:09 +0000
+++ bzrlib/tests/test_bundle.py 2009-08-27 22:17:35 +0000
@@ -1830,9 +1830,8 @@
1830 """1830 """
1831 from bzrlib.tests.blackbox.test_push import RedirectingMemoryServer1831 from bzrlib.tests.blackbox.test_push import RedirectingMemoryServer
1832 server = RedirectingMemoryServer()1832 server = RedirectingMemoryServer()
1833 server.setUp()1833 self.start_server(server)
1834 url = server.get_url() + 'infinite-loop'1834 url = server.get_url() + 'infinite-loop'
1835 self.addCleanup(server.tearDown)
1836 self.assertRaises(errors.NotABundle, read_mergeable_from_url, url)1835 self.assertRaises(errors.NotABundle, read_mergeable_from_url, url)
18371836
1838 def test_smart_server_connection_reset(self):1837 def test_smart_server_connection_reset(self):
@@ -1841,8 +1840,7 @@
1841 """1840 """
1842 # Instantiate a server that will provoke a ConnectionReset1841 # Instantiate a server that will provoke a ConnectionReset
1843 sock_server = _DisconnectingTCPServer()1842 sock_server = _DisconnectingTCPServer()
1844 sock_server.setUp()1843 self.start_server(sock_server)
1845 self.addCleanup(sock_server.tearDown)
1846 # We don't really care what the url is since the server will close the1844 # We don't really care what the url is since the server will close the
1847 # connection without interpreting it1845 # connection without interpreting it
1848 url = sock_server.get_url()1846 url = sock_server.get_url()
18491847
=== modified file 'bzrlib/tests/test_crash.py'
--- bzrlib/tests/test_crash.py 2009-08-20 04:45:48 +0000
+++ bzrlib/tests/test_crash.py 2009-08-28 12:38:01 +0000
@@ -18,20 +18,17 @@
 from StringIO import StringIO
 import sys
 
-
-from bzrlib.crash import (
-    report_bug,
-    _write_apport_report_to_file,
-    )
-from bzrlib.tests import TestCase
-from bzrlib.tests.features import ApportFeature
+from bzrlib import (
+    crash,
+    tests,
+    )
+
+from bzrlib.tests import features
 
 
-class TestApportReporting(TestCase):
+class TestApportReporting(tests.TestCase):
 
-    def setUp(self):
-        TestCase.setUp(self)
-        self.requireFeature(ApportFeature)
+    _test_needs_features = [features.ApportFeature]
 
     def test_apport_report_contents(self):
         try:
@@ -39,19 +36,13 @@
         except AssertionError, e:
             pass
         outf = StringIO()
-        _write_apport_report_to_file(sys.exc_info(),
-                                     outf)
+        crash._write_apport_report_to_file(sys.exc_info(), outf)
         report = outf.getvalue()
 
-        self.assertContainsRe(report,
-            '(?m)^BzrVersion:')
+        self.assertContainsRe(report, '(?m)^BzrVersion:')
         # should be in the traceback
-        self.assertContainsRe(report,
-            'my error')
-        self.assertContainsRe(report,
-            'AssertionError')
-        self.assertContainsRe(report,
-            'test_apport_report_contents')
+        self.assertContainsRe(report, 'my error')
+        self.assertContainsRe(report, 'AssertionError')
+        self.assertContainsRe(report, 'test_apport_report_contents')
         # should also be in there
-        self.assertContainsRe(report,
-            '(?m)^CommandLine:.*selftest')
+        self.assertContainsRe(report, '(?m)^CommandLine:')
 
=== modified file 'bzrlib/tests/test_groupcompress.py'
--- bzrlib/tests/test_groupcompress.py 2009-06-29 14:51:13 +0000
+++ bzrlib/tests/test_groupcompress.py 2009-09-03 15:26:27 +0000
@@ -538,7 +538,7 @@
             'as-requested', False)]
         self.assertEqual([('b',), ('a',), ('d',), ('c',)], keys)
 
-    def test_insert_record_stream_re_uses_blocks(self):
+    def test_insert_record_stream_reuses_blocks(self):
         vf = self.make_test_vf(True, dir='source')
         def grouped_stream(revision_ids, first_parents=()):
             parents = first_parents
@@ -582,8 +582,14 @@
         vf2 = self.make_test_vf(True, dir='target')
         # ordering in 'groupcompress' order, should actually swap the groups in
         # the target vf, but the groups themselves should not be disturbed.
-        vf2.insert_record_stream(vf.get_record_stream(
-            [(r,) for r in 'abcdefgh'], 'groupcompress', False))
+        def small_size_stream():
+            for record in vf.get_record_stream([(r,) for r in 'abcdefgh'],
+                                               'groupcompress', False):
+                record._manager._full_enough_block_size = \
+                    record._manager._block._content_length
+                yield record
+
+        vf2.insert_record_stream(small_size_stream())
         stream = vf2.get_record_stream([(r,) for r in 'abcdefgh'],
             'groupcompress', False)
         vf2.writer.end()
@@ -594,6 +600,44 @@
                 record._manager._block._z_content)
         self.assertEqual(8, num_records)
 
+    def test_insert_record_stream_packs_on_the_fly(self):
+        vf = self.make_test_vf(True, dir='source')
+        def grouped_stream(revision_ids, first_parents=()):
+            parents = first_parents
+            for revision_id in revision_ids:
+                key = (revision_id,)
+                record = versionedfile.FulltextContentFactory(
+                    key, parents, None,
+                    'some content that is\n'
+                    'identical except for\n'
+                    'revision_id:%s\n' % (revision_id,))
+                yield record
+                parents = (key,)
+        # One group, a-d
+        vf.insert_record_stream(grouped_stream(['a', 'b', 'c', 'd']))
+        # Second group, e-h
+        vf.insert_record_stream(grouped_stream(['e', 'f', 'g', 'h'],
+                                               first_parents=(('d',),)))
+        # Now copy the blocks into another vf, and see that the
+        # insert_record_stream rebuilt a new block on-the-fly because of
+        # under-utilization
+        vf2 = self.make_test_vf(True, dir='target')
+        vf2.insert_record_stream(vf.get_record_stream(
+            [(r,) for r in 'abcdefgh'], 'groupcompress', False))
+        stream = vf2.get_record_stream([(r,) for r in 'abcdefgh'],
+                                       'groupcompress', False)
+        vf2.writer.end()
+        num_records = 0
+        # All of the records should be recombined into a single block
+        block = None
+        for record in stream:
+            num_records += 1
+            if block is None:
+                block = record._manager._block
+            else:
+                self.assertIs(block, record._manager._block)
+        self.assertEqual(8, num_records)
+
     def test__insert_record_stream_no_reuse_block(self):
         vf = self.make_test_vf(True, dir='source')
         def grouped_stream(revision_ids, first_parents=()):
@@ -702,19 +746,128 @@
                                  " 0 8', \(\(\('a',\),\),\)\)")
 
 
+class StubGCVF(object):
+    def __init__(self, canned_get_blocks=None):
+        self._group_cache = {}
+        self._canned_get_blocks = canned_get_blocks or []
+    def _get_blocks(self, read_memos):
+        return iter(self._canned_get_blocks)
+
+
+class Test_BatchingBlockFetcher(TestCaseWithGroupCompressVersionedFiles):
+    """Simple whitebox unit tests for _BatchingBlockFetcher."""
+
+    def test_add_key_new_read_memo(self):
+        """Adding a key with an uncached read_memo new to this batch adds that
+        read_memo to the list of memos to fetch.
+        """
+        # locations are: index_memo, ignored, parents, ignored
+        # where index_memo is: (idx, offset, len, factory_start, factory_end)
+        # and (idx, offset, size) is known as the 'read_memo', identifying the
+        # raw bytes needed.
+        read_memo = ('fake index', 100, 50)
+        locations = {
+            ('key',): (read_memo + (None, None), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(StubGCVF(), locations)
+        total_size = batcher.add_key(('key',))
+        self.assertEqual(50, total_size)
+        self.assertEqual([('key',)], batcher.keys)
+        self.assertEqual([read_memo], batcher.memos_to_get)
+
+    def test_add_key_duplicate_read_memo(self):
+        """read_memos that occur multiple times in a batch will only be fetched
+        once.
+        """
+        read_memo = ('fake index', 100, 50)
+        # Two keys, both sharing the same read memo (but different overall
+        # index_memos).
+        locations = {
+            ('key1',): (read_memo + (0, 1), None, None, None),
+            ('key2',): (read_memo + (1, 2), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(StubGCVF(), locations)
+        total_size = batcher.add_key(('key1',))
+        total_size = batcher.add_key(('key2',))
+        self.assertEqual(50, total_size)
+        self.assertEqual([('key1',), ('key2',)], batcher.keys)
+        self.assertEqual([read_memo], batcher.memos_to_get)
+
+    def test_add_key_cached_read_memo(self):
+        """Adding a key with a cached read_memo will not cause that read_memo
+        to be added to the list to fetch.
+        """
+        read_memo = ('fake index', 100, 50)
+        gcvf = StubGCVF()
+        gcvf._group_cache[read_memo] = 'fake block'
+        locations = {
+            ('key',): (read_memo + (None, None), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(gcvf, locations)
+        total_size = batcher.add_key(('key',))
+        self.assertEqual(0, total_size)
+        self.assertEqual([('key',)], batcher.keys)
+        self.assertEqual([], batcher.memos_to_get)
+
+    def test_yield_factories_empty(self):
+        """An empty batch yields no factories."""
+        batcher = groupcompress._BatchingBlockFetcher(StubGCVF(), {})
+        self.assertEqual([], list(batcher.yield_factories()))
+
+    def test_yield_factories_calls_get_blocks(self):
+        """Uncached memos are retrieved via get_blocks."""
+        read_memo1 = ('fake index', 100, 50)
+        read_memo2 = ('fake index', 150, 40)
+        gcvf = StubGCVF(
+            canned_get_blocks=[
+                (read_memo1, groupcompress.GroupCompressBlock()),
+                (read_memo2, groupcompress.GroupCompressBlock())])
+        locations = {
+            ('key1',): (read_memo1 + (None, None), None, None, None),
+            ('key2',): (read_memo2 + (None, None), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(gcvf, locations)
+        batcher.add_key(('key1',))
+        batcher.add_key(('key2',))
+        factories = list(batcher.yield_factories(full_flush=True))
+        self.assertLength(2, factories)
+        keys = [f.key for f in factories]
+        kinds = [f.storage_kind for f in factories]
+        self.assertEqual([('key1',), ('key2',)], keys)
+        self.assertEqual(['groupcompress-block', 'groupcompress-block'], kinds)
+
+    def test_yield_factories_flushing(self):
+        """yield_factories holds back on yielding results from the final block
+        unless passed full_flush=True.
+        """
+        fake_block = groupcompress.GroupCompressBlock()
+        read_memo = ('fake index', 100, 50)
+        gcvf = StubGCVF()
+        gcvf._group_cache[read_memo] = fake_block
+        locations = {
+            ('key',): (read_memo + (None, None), None, None, None)}
+        batcher = groupcompress._BatchingBlockFetcher(gcvf, locations)
+        batcher.add_key(('key',))
+        self.assertEqual([], list(batcher.yield_factories()))
+        factories = list(batcher.yield_factories(full_flush=True))
+        self.assertLength(1, factories)
+        self.assertEqual(('key',), factories[0].key)
+        self.assertEqual('groupcompress-block', factories[0].storage_kind)
+
+
 class TestLazyGroupCompress(tests.TestCaseWithTransport):
 
     _texts = {
         ('key1',): "this is a text\n"
-                   "with a reasonable amount of compressible bytes\n",
+                   "with a reasonable amount of compressible bytes\n"
+                   "which can be shared between various other texts\n",
         ('key2',): "another text\n"
-                   "with a reasonable amount of compressible bytes\n",
+                   "with a reasonable amount of compressible bytes\n"
+                   "which can be shared between various other texts\n",
         ('key3',): "yet another text which won't be extracted\n"
-                   "with a reasonable amount of compressible bytes\n",
+                   "with a reasonable amount of compressible bytes\n"
+                   "which can be shared between various other texts\n",
         ('key4',): "this will be extracted\n"
                    "but references most of its bytes from\n"
                    "yet another text which won't be extracted\n"
-                   "with a reasonable amount of compressible bytes\n",
+                   "with a reasonable amount of compressible bytes\n"
+                   "which can be shared between various other texts\n",
     }
     def make_block(self, key_to_text):
         """Create a GroupCompressBlock, filling it with the given texts."""
@@ -732,6 +885,13 @@
         start, end = locations[key]
         manager.add_factory(key, (), start, end)
 
+    def make_block_and_full_manager(self, texts):
+        locations, block = self.make_block(texts)
+        manager = groupcompress._LazyGroupContentManager(block)
+        for key in sorted(texts):
+            self.add_key_to_manager(key, locations, block, manager)
+        return block, manager
+
     def test_get_fulltexts(self):
         locations, block = self.make_block(self._texts)
         manager = groupcompress._LazyGroupContentManager(block)
@@ -788,8 +948,8 @@
         header_len = int(header_len)
         block_len = int(block_len)
         self.assertEqual('groupcompress-block', storage_kind)
-        self.assertEqual(33, z_header_len)
-        self.assertEqual(25, header_len)
+        self.assertEqual(34, z_header_len)
+        self.assertEqual(26, header_len)
         self.assertEqual(len(block_bytes), block_len)
         z_header = rest[:z_header_len]
         header = zlib.decompress(z_header)
@@ -829,13 +989,7 @@
         self.assertEqual([('key1',), ('key4',)], result_order)
 
     def test__check_rebuild_no_changes(self):
-        locations, block = self.make_block(self._texts)
-        manager = groupcompress._LazyGroupContentManager(block)
-        # Request all the keys, which ensures that we won't rebuild
-        self.add_key_to_manager(('key1',), locations, block, manager)
-        self.add_key_to_manager(('key2',), locations, block, manager)
-        self.add_key_to_manager(('key3',), locations, block, manager)
-        self.add_key_to_manager(('key4',), locations, block, manager)
+        block, manager = self.make_block_and_full_manager(self._texts)
         manager._check_rebuild_block()
         self.assertIs(block, manager._block)
 
@@ -866,3 +1020,50 @@
         self.assertEqual(('key4',), record.key)
         self.assertEqual(self._texts[record.key],
                          record.get_bytes_as('fulltext'))
+
+    def test_check_is_well_utilized_all_keys(self):
+        block, manager = self.make_block_and_full_manager(self._texts)
+        self.assertFalse(manager.check_is_well_utilized())
+        # Though we can fake it by changing the recommended minimum size
+        manager._full_enough_block_size = block._content_length
+        self.assertTrue(manager.check_is_well_utilized())
+        # Setting it just above causes it to fail
+        manager._full_enough_block_size = block._content_length + 1
+        self.assertFalse(manager.check_is_well_utilized())
+        # Setting the mixed-block size doesn't do anything, because the content
+        # is considered to not be 'mixed'
+        manager._full_enough_mixed_block_size = block._content_length
+        self.assertFalse(manager.check_is_well_utilized())
+
+    def test_check_is_well_utilized_mixed_keys(self):
+        texts = {}
+        f1k1 = ('f1', 'k1')
+        f1k2 = ('f1', 'k2')
+        f2k1 = ('f2', 'k1')
+        f2k2 = ('f2', 'k2')
+        texts[f1k1] = self._texts[('key1',)]
+        texts[f1k2] = self._texts[('key2',)]
+        texts[f2k1] = self._texts[('key3',)]
+        texts[f2k2] = self._texts[('key4',)]
+        block, manager = self.make_block_and_full_manager(texts)
+        self.assertFalse(manager.check_is_well_utilized())
+        manager._full_enough_block_size = block._content_length
+        self.assertTrue(manager.check_is_well_utilized())
+        manager._full_enough_block_size = block._content_length + 1
+        self.assertFalse(manager.check_is_well_utilized())
+        manager._full_enough_mixed_block_size = block._content_length
+        self.assertTrue(manager.check_is_well_utilized())
+
+    def test_check_is_well_utilized_partial_use(self):
+        locations, block = self.make_block(self._texts)
+        manager = groupcompress._LazyGroupContentManager(block)
+        manager._full_enough_block_size = block._content_length
+        self.add_key_to_manager(('key1',), locations, block, manager)
+        self.add_key_to_manager(('key2',), locations, block, manager)
+        # Just using the content from key1 and 2 is not enough to be considered
+        # 'complete'
+        self.assertFalse(manager.check_is_well_utilized())
+        # However if we add key3, then we have enough, as we only require 75%
+        # consumption
+        self.add_key_to_manager(('key4',), locations, block, manager)
+        self.assertTrue(manager.check_is_well_utilized())
 
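The check_is_well_utilized tests above drive the pack-on-the-fly decision described in this proposal: a block is reused as-is only when it is at least as large as a "full enough" block and most of its content is actually wanted; otherwise its texts are recompressed into a new block during insert_record_stream. A rough standalone sketch of that decision follows — the parameter names, the 3MB default, and the flat 75% cut-off are illustrative assumptions here; the real logic and thresholds live in _LazyGroupContentManager.check_is_well_utilized():

```python
def is_well_utilized(block_content_length, bytes_wanted,
                     full_enough_block_size=3 * 1024 * 1024):
    """Decide whether an existing group block should be reused as-is.

    Sketch only: the 3MB default and the flat 75% consumption rule are
    assumptions for illustration, not bzrlib's exact thresholds.
    """
    if block_content_length < full_enough_block_size:
        # A small block is cheap to recompress and likely poorly packed.
        return False
    # Require that roughly 75% of the block's content is actually wanted.
    return bytes_wanted >= 0.75 * block_content_length
```

This matches the numbers in the proposal description: a well-packed source passes the check and streams through unchanged, while a poorly packed one fails it and pays the one-time recompression cost.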
=== modified file 'bzrlib/tests/test_http.py'
--- bzrlib/tests/test_http.py 2009-08-19 16:33:39 +0000
+++ bzrlib/tests/test_http.py 2009-08-27 22:17:35 +0000
@@ -304,7 +304,7 @@
 
         server = http_server.HttpServer(BogusRequestHandler)
         try:
-            self.assertRaises(httplib.UnknownProtocol,server.setUp)
+            self.assertRaises(httplib.UnknownProtocol, server.setUp)
         except:
             server.tearDown()
             self.fail('HTTP Server creation did not raise UnknownProtocol')
@@ -312,7 +312,7 @@
     def test_force_invalid_protocol(self):
         server = http_server.HttpServer(protocol_version='HTTP/0.1')
         try:
-            self.assertRaises(httplib.UnknownProtocol,server.setUp)
+            self.assertRaises(httplib.UnknownProtocol, server.setUp)
         except:
             server.tearDown()
             self.fail('HTTP Server creation did not raise UnknownProtocol')
@@ -320,8 +320,10 @@
     def test_server_start_and_stop(self):
         server = http_server.HttpServer()
         server.setUp()
-        self.assertTrue(server._http_running)
-        server.tearDown()
+        try:
+            self.assertTrue(server._http_running)
+        finally:
+            server.tearDown()
         self.assertFalse(server._http_running)
 
     def test_create_http_server_one_zero(self):
@@ -330,8 +332,7 @@
         protocol_version = 'HTTP/1.0'
 
         server = http_server.HttpServer(RequestHandlerOneZero)
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         self.assertIsInstance(server._httpd, http_server.TestingHTTPServer)
 
     def test_create_http_server_one_one(self):
@@ -340,8 +341,7 @@
         protocol_version = 'HTTP/1.1'
 
         server = http_server.HttpServer(RequestHandlerOneOne)
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         self.assertIsInstance(server._httpd,
                               http_server.TestingThreadingHTTPServer)
 
@@ -352,8 +352,7 @@
 
         server = http_server.HttpServer(RequestHandlerOneZero,
                                         protocol_version='HTTP/1.1')
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         self.assertIsInstance(server._httpd,
                               http_server.TestingThreadingHTTPServer)
 
@@ -364,8 +363,7 @@
 
         server = http_server.HttpServer(RequestHandlerOneOne,
                                         protocol_version='HTTP/1.0')
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         self.assertIsInstance(server._httpd,
                               http_server.TestingHTTPServer)
 
@@ -431,8 +429,8 @@
     def test_http_impl_urls(self):
         """There are servers which ask for particular clients to connect"""
         server = self._server()
+        server.setUp()
         try:
-            server.setUp()
             url = server.get_url()
             self.assertTrue(url.startswith('%s://' % self._qualified_prefix))
         finally:
@@ -544,8 +542,7 @@
 
     def test_post_body_is_received(self):
         server = RecordingServer(expect_body_tail='end-of-body')
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         scheme = self._qualified_prefix
         url = '%s://%s:%s/' % (scheme, server.host, server.port)
         http_transport = self._transport(url)
@@ -780,8 +777,7 @@
 
     def test_send_receive_bytes(self):
         server = RecordingServer(expect_body_tail='c')
-        server.setUp()
-        self.addCleanup(server.tearDown)
+        self.start_server(server)
         sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
         sock.connect((server.host, server.port))
         sock.sendall('abc')
 
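Many of the hunks above replace the repeated `server.setUp(); self.addCleanup(server.tearDown)` pair with a single `self.start_server(server)` call. A hypothetical minimal version of that helper (the real one lives on bzrlib's TestCase; the class and method names below are assumptions for illustration) shows why the change is safer — teardown is registered immediately after a successful setUp, so a later failure in the test cannot leak a running server:

```python
class StartServerMixin(object):
    """Sketch of the start_server() helper adopted across these tests.

    Hypothetical minimal version; assumes server objects expose the
    setUp()/tearDown() pair used throughout this diff.
    """
    def __init__(self):
        self._cleanups = []

    def addCleanup(self, callback):
        self._cleanups.append(callback)

    def start_server(self, server, backing_server=None):
        if backing_server is None:
            server.setUp()
        else:
            server.setUp(backing_server)
        # Registering teardown right away means it runs even if the rest
        # of the test (or a later start_server call) raises.
        self.addCleanup(server.tearDown)

    def run_cleanups(self):
        # Cleanups run last-in-first-out, mirroring unittest behaviour.
        while self._cleanups:
            self._cleanups.pop()()
```

The `self.start_server(self.smart_server, self.get_server())` form in test_remote.py corresponds to the `backing_server` branch above.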
=== modified file 'bzrlib/tests/test_lsprof.py'
--- bzrlib/tests/test_lsprof.py 2009-03-23 14:59:43 +0000
+++ bzrlib/tests/test_lsprof.py 2009-08-24 21:05:09 +0000
@@ -92,3 +92,22 @@
         self.stats.save(f)
         data1 = cPickle.load(open(f))
         self.assertEqual(type(data1), bzrlib.lsprof.Stats)
+
+
+class TestBzrProfiler(tests.TestCase):
+
+    _test_needs_features = [LSProfFeature]
+
+    def test_start_call_stuff_stop(self):
+        profiler = bzrlib.lsprof.BzrProfiler()
+        profiler.start()
+        try:
+            def a_function():
+                pass
+            a_function()
+        finally:
+            stats = profiler.stop()
+        stats.freeze()
+        lines = [str(data) for data in stats.data]
+        lines = [line for line in lines if 'a_function' in line]
+        self.assertLength(1, lines)
 
=== modified file 'bzrlib/tests/test_remote.py'
--- bzrlib/tests/test_remote.py 2009-08-27 05:22:14 +0000
+++ bzrlib/tests/test_remote.py 2009-08-30 21:34:42 +0000
@@ -1945,8 +1945,7 @@
     def test_allows_new_revisions(self):
         """get_parent_map's results can be updated by commit."""
         smart_server = server.SmartTCPServer_for_testing()
-        smart_server.setUp()
-        self.addCleanup(smart_server.tearDown)
+        self.start_server(smart_server)
         self.make_branch('branch')
         branch = Branch.open(smart_server.get_url() + '/branch')
         tree = branch.create_checkout('tree', lightweight=True)
@@ -2781,8 +2780,7 @@
         stacked_branch.set_stacked_on_url('../base')
         # start a server looking at this
         smart_server = server.SmartTCPServer_for_testing()
-        smart_server.setUp()
-        self.addCleanup(smart_server.tearDown)
+        self.start_server(smart_server)
         remote_bzrdir = BzrDir.open(smart_server.get_url() + '/stacked')
         # can get its branch and repository
         remote_branch = remote_bzrdir.open_branch()
@@ -2943,8 +2941,7 @@
         # Create a smart server that publishes whatever the backing VFS server
         # does.
         self.smart_server = server.SmartTCPServer_for_testing()
-        self.smart_server.setUp(self.get_server())
-        self.addCleanup(self.smart_server.tearDown)
+        self.start_server(self.smart_server, self.get_server())
         # Log all HPSS calls into self.hpss_calls.
         _SmartClient.hooks.install_named_hook(
             'call', self.capture_hpss_call, None)
 
=== modified file 'bzrlib/tests/test_repository.py'
--- bzrlib/tests/test_repository.py 2009-08-17 23:15:55 +0000
+++ bzrlib/tests/test_repository.py 2009-09-01 21:21:53 +0000
@@ -683,6 +683,28 @@
 
 class Test2a(TestCaseWithTransport):
 
+    def test_fetch_combines_groups(self):
+        builder = self.make_branch_builder('source', format='2a')
+        builder.start_series()
+        builder.build_snapshot('1', None, [
+            ('add', ('', 'root-id', 'directory', '')),
+            ('add', ('file', 'file-id', 'file', 'content\n'))])
+        builder.build_snapshot('2', ['1'], [
+            ('modify', ('file-id', 'content-2\n'))])
+        builder.finish_series()
+        source = builder.get_branch()
+        target = self.make_repository('target', format='2a')
+        target.fetch(source.repository)
+        target.lock_read()
+        self.addCleanup(target.unlock)
+        details = target.texts._index.get_build_details(
+            [('file-id', '1',), ('file-id', '2',)])
+        file_1_details = details[('file-id', '1')]
+        file_2_details = details[('file-id', '2')]
+        # The index, and what to read off disk, should be the same for both
+        # versions of the file.
+        self.assertEqual(file_1_details[0][:3], file_2_details[0][:3])
+
     def test_format_pack_compresses_True(self):
         repo = self.make_repository('repo', format='2a')
         self.assertTrue(repo._format.pack_compresses)
 
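The `test_fetch_combines_groups` addition above exercises the pack-on-the-fly heuristic this branch introduces: during fetch, a compressed block well under the size a fully utilized block would have is not streamed as-is, but has its texts scheduled into a new block, so both versions of the file end up sharing one group. A minimal sketch of that decision with hypothetical names (the real logic lives in `_LazyGroupContentManager.check_is_well_utilized()`, and this is not bzrlib's actual API):

```python
def should_repack(block_size, fully_utilized_size, threshold=0.75):
    """Return True when a block is too sparsely used to stream as-is.

    `fully_utilized_size` is whatever a well-packed block holding the
    same content would occupy; both parameter names are illustrative.
    """
    # Blocks smaller than `threshold` of the fully-utilized size get
    # repacked on the fly instead of being reused directly.
    return block_size < threshold * fully_utilized_size
```

With this rule, a poorly packed source pays a one-time repacking cost on fetch, while an already well-packed source streams its blocks unchanged — matching the 1m43s vs 1m0s timings in the cover letter.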
=== modified file 'bzrlib/tests/test_selftest.py'
--- bzrlib/tests/test_selftest.py 2009-08-24 05:35:28 +0000
+++ bzrlib/tests/test_selftest.py 2009-08-26 23:25:28 +0000
@@ -687,6 +687,26 @@
         self.assertEqual(url, t.clone('..').base)
 
 
+class TestProfileResult(tests.TestCase):
+
+    def test_profiles_tests(self):
+        self.requireFeature(test_lsprof.LSProfFeature)
+        terminal = unittest.TestResult()
+        result = tests.ProfileResult(terminal)
+        class Sample(tests.TestCase):
+            def a(self):
+                self.sample_function()
+            def sample_function(self):
+                pass
+        test = Sample("a")
+        test.attrs_to_keep = test.attrs_to_keep + ('_benchcalls',)
+        test.run(result)
+        self.assertLength(1, test._benchcalls)
+        # We must be able to unpack it as the test reporting code wants
+        (_, _, _), stats = test._benchcalls[0]
+        self.assertTrue(callable(stats.pprint))
+
+
 class TestTestResult(tests.TestCase):
 
     def check_timing(self, test_case, expected_re):
@@ -800,7 +820,7 @@
     def test_known_failure(self):
         """A KnownFailure being raised should trigger several result actions."""
         class InstrumentedTestResult(tests.ExtendedTestResult):
-            def done(self): pass
+            def stopTestRun(self): pass
             def startTests(self): pass
             def report_test_start(self, test): pass
             def report_known_failure(self, test, err):
@@ -854,7 +874,7 @@
     def test_add_not_supported(self):
         """Test the behaviour of invoking addNotSupported."""
         class InstrumentedTestResult(tests.ExtendedTestResult):
-            def done(self): pass
+            def stopTestRun(self): pass
             def startTests(self): pass
             def report_test_start(self, test): pass
             def report_unsupported(self, test, feature):
@@ -898,7 +918,7 @@
     def test_unavailable_exception(self):
         """An UnavailableFeature being raised should invoke addNotSupported."""
         class InstrumentedTestResult(tests.ExtendedTestResult):
-            def done(self): pass
+            def stopTestRun(self): pass
             def startTests(self): pass
             def report_test_start(self, test): pass
             def addNotSupported(self, test, feature):
@@ -981,11 +1001,14 @@
         because of our use of global state.
         """
         old_root = tests.TestCaseInTempDir.TEST_ROOT
+        old_leak = tests.TestCase._first_thread_leaker_id
         try:
             tests.TestCaseInTempDir.TEST_ROOT = None
+            tests.TestCase._first_thread_leaker_id = None
             return testrunner.run(test)
         finally:
             tests.TestCaseInTempDir.TEST_ROOT = old_root
+            tests.TestCase._first_thread_leaker_id = old_leak
 
     def test_known_failure_failed_run(self):
         # run a test that generates a known failure which should be printed in
@@ -1031,6 +1054,20 @@
             '\n'
             'OK \\(known_failures=1\\)\n')
 
+    def test_result_decorator(self):
+        # decorate results
+        calls = []
+        class LoggingDecorator(tests.ForwardingResult):
+            def startTest(self, test):
+                tests.ForwardingResult.startTest(self, test)
+                calls.append('start')
+        test = unittest.FunctionTestCase(lambda:None)
+        stream = StringIO()
+        runner = tests.TextTestRunner(stream=stream,
+            result_decorators=[LoggingDecorator])
+        result = self.run_test_runner(runner, test)
+        self.assertLength(1, calls)
+
     def test_skipped_test(self):
         # run a test that is skipped, and check the suite as a whole still
         # succeeds.
@@ -1103,10 +1140,6 @@
         self.assertContainsRe(out.getvalue(),
             r'(?m)^ this test never runs')
 
-    def test_not_applicable_demo(self):
-        # just so you can see it in the test output
-        raise tests.TestNotApplicable('this test is just a demonstation')
-
     def test_unsupported_features_listed(self):
         """When unsupported features are encountered they are detailed."""
         class Feature1(tests.Feature):
@@ -1261,6 +1294,34 @@
         self.assertContainsRe(log, 'this will be kept')
         self.assertEqual(log, test._log_contents)
 
+    def test_startTestRun(self):
+        """run should call result.startTestRun()"""
+        calls = []
+        class LoggingDecorator(tests.ForwardingResult):
+            def startTestRun(self):
+                tests.ForwardingResult.startTestRun(self)
+                calls.append('startTestRun')
+        test = unittest.FunctionTestCase(lambda:None)
+        stream = StringIO()
+        runner = tests.TextTestRunner(stream=stream,
+            result_decorators=[LoggingDecorator])
+        result = self.run_test_runner(runner, test)
+        self.assertLength(1, calls)
+
+    def test_stopTestRun(self):
+        """run should call result.stopTestRun()"""
+        calls = []
+        class LoggingDecorator(tests.ForwardingResult):
+            def stopTestRun(self):
+                tests.ForwardingResult.stopTestRun(self)
+                calls.append('stopTestRun')
+        test = unittest.FunctionTestCase(lambda:None)
+        stream = StringIO()
+        runner = tests.TextTestRunner(stream=stream,
+            result_decorators=[LoggingDecorator])
+        result = self.run_test_runner(runner, test)
+        self.assertLength(1, calls)
+
 
 class SampleTestCase(tests.TestCase):
 
@@ -1480,6 +1541,7 @@
         self.assertEqual((time.sleep, (0.003,), {}), self._benchcalls[1][0])
         self.assertIsInstance(self._benchcalls[0][1], bzrlib.lsprof.Stats)
         self.assertIsInstance(self._benchcalls[1][1], bzrlib.lsprof.Stats)
+        del self._benchcalls[:]
 
     def test_knownFailure(self):
         """Self.knownFailure() should raise a KnownFailure exception."""
@@ -1742,16 +1804,16 @@
         tree = self.make_branch_and_memory_tree('a')
         self.assertIsInstance(tree, bzrlib.memorytree.MemoryTree)
 
-
-class TestSFTPMakeBranchAndTree(test_sftp_transport.TestCaseWithSFTPServer):
-
-    def test_make_tree_for_sftp_branch(self):
-        """Transports backed by local directories create local trees."""
-        # NB: This is arguably a bug in the definition of make_branch_and_tree.
+    def test_make_tree_for_local_vfs_backed_transport(self):
+        # make_branch_and_tree has to use local branch and repositories
+        # when the vfs transport and local disk are colocated, even if
+        # a different transport is in use for url generation.
+        from bzrlib.transport.fakevfat import FakeVFATServer
+        self.transport_server = FakeVFATServer
+        self.assertFalse(self.get_url('t1').startswith('file://'))
         tree = self.make_branch_and_tree('t1')
         base = tree.bzrdir.root_transport.base
-        self.failIf(base.startswith('sftp'),
-            'base %r is on sftp but should be local' % base)
+        self.assertStartsWith(base, 'file://')
         self.assertEquals(tree.bzrdir.root_transport,
             tree.branch.bzrdir.root_transport)
         self.assertEquals(tree.bzrdir.root_transport,
@@ -1817,6 +1879,20 @@
         self.assertNotContainsRe("Test.b", output.getvalue())
         self.assertLength(2, output.readlines())
 
+    def test_lsprof_tests(self):
+        self.requireFeature(test_lsprof.LSProfFeature)
+        calls = []
+        class Test(object):
+            def __call__(test, result):
+                test.run(result)
+            def run(test, result):
+                self.assertIsInstance(result, tests.ForwardingResult)
+                calls.append("called")
+            def countTestCases(self):
+                return 1
+        self.run_selftest(test_suite_factory=Test, lsprof_tests=True)
+        self.assertLength(1, calls)
+
     def test_random(self):
         # test randomising by listing a number of tests.
         output_123 = self.run_selftest(test_suite_factory=self.factory,
@@ -1877,8 +1953,8 @@
     def test_transport_sftp(self):
         try:
             import bzrlib.transport.sftp
-        except ParamikoNotPresent:
-            raise TestSkipped("Paramiko not present")
+        except errors.ParamikoNotPresent:
+            raise tests.TestSkipped("Paramiko not present")
         self.check_transport_set(bzrlib.transport.sftp.SFTPAbsoluteServer)
 
     def test_transport_memory(self):
@@ -2072,7 +2148,8 @@
         return self.out, self.err
 
 
-class TestRunBzrSubprocess(tests.TestCaseWithTransport):
+class TestWithFakedStartBzrSubprocess(tests.TestCaseWithTransport):
+    """Base class for tests testing how we might run bzr."""
 
     def setUp(self):
         tests.TestCaseWithTransport.setUp(self)
@@ -2089,6 +2166,9 @@
             'working_dir':working_dir, 'allow_plugins':allow_plugins})
         return self.next_subprocess
 
+
+class TestRunBzrSubprocess(TestWithFakedStartBzrSubprocess):
+
     def assertRunBzrSubprocess(self, expected_args, process, *args, **kwargs):
         """Run run_bzr_subprocess with args and kwargs using a stubbed process.
 
@@ -2157,6 +2237,32 @@
             StubProcess(), '', allow_plugins=True)
 
 
+class TestFinishBzrSubprocess(TestWithFakedStartBzrSubprocess):
+
+    def test_finish_bzr_subprocess_with_error(self):
+        """finish_bzr_subprocess allows specification of the desired exit code.
+        """
+        process = StubProcess(err="unknown command", retcode=3)
+        result = self.finish_bzr_subprocess(process, retcode=3)
+        self.assertEqual('', result[0])
+        self.assertContainsRe(result[1], 'unknown command')
+
+    def test_finish_bzr_subprocess_ignoring_retcode(self):
+        """finish_bzr_subprocess allows the exit code to be ignored."""
+        process = StubProcess(err="unknown command", retcode=3)
+        result = self.finish_bzr_subprocess(process, retcode=None)
+        self.assertEqual('', result[0])
+        self.assertContainsRe(result[1], 'unknown command')
+
+    def test_finish_subprocess_with_unexpected_retcode(self):
+        """finish_bzr_subprocess raises self.failureException if the retcode is
+        not the expected one.
+        """
+        process = StubProcess(err="unknown command", retcode=3)
+        self.assertRaises(self.failureException, self.finish_bzr_subprocess,
+            process)
+
+
 class _DontSpawnProcess(Exception):
     """A simple exception which just allows us to skip unnecessary steps"""
 
@@ -2240,39 +2346,8 @@
         self.assertEqual(['foo', 'current'], chdirs)
 
 
-class TestBzrSubprocess(tests.TestCaseWithTransport):
-
-    def test_start_and_stop_bzr_subprocess(self):
-        """We can start and perform other test actions while that process is
-        still alive.
-        """
-        process = self.start_bzr_subprocess(['--version'])
-        result = self.finish_bzr_subprocess(process)
-        self.assertContainsRe(result[0], 'is free software')
-        self.assertEqual('', result[1])
-
-    def test_start_and_stop_bzr_subprocess_with_error(self):
-        """finish_bzr_subprocess allows specification of the desired exit code.
-        """
-        process = self.start_bzr_subprocess(['--versionn'])
-        result = self.finish_bzr_subprocess(process, retcode=3)
-        self.assertEqual('', result[0])
-        self.assertContainsRe(result[1], 'unknown command')
-
-    def test_start_and_stop_bzr_subprocess_ignoring_retcode(self):
-        """finish_bzr_subprocess allows the exit code to be ignored."""
-        process = self.start_bzr_subprocess(['--versionn'])
-        result = self.finish_bzr_subprocess(process, retcode=None)
-        self.assertEqual('', result[0])
-        self.assertContainsRe(result[1], 'unknown command')
-
-    def test_start_and_stop_bzr_subprocess_with_unexpected_retcode(self):
-        """finish_bzr_subprocess raises self.failureException if the retcode is
-        not the expected one.
-        """
-        process = self.start_bzr_subprocess(['--versionn'])
-        self.assertRaises(self.failureException, self.finish_bzr_subprocess,
-            process)
+class TestActuallyStartBzrSubprocess(tests.TestCaseWithTransport):
+    """Tests that really need to do things with an external bzr."""
 
     def test_start_and_stop_bzr_subprocess_send_signal(self):
         """finish_bzr_subprocess raises self.failureException if the retcode is
@@ -2286,14 +2361,6 @@
         self.assertEqual('', result[0])
         self.assertEqual('bzr: interrupted\n', result[1])
 
-    def test_start_and_stop_working_dir(self):
-        cwd = osutils.getcwd()
-        self.make_branch_and_tree('one')
-        process = self.start_bzr_subprocess(['root'], working_dir='one')
-        result = self.finish_bzr_subprocess(process, universal_newlines=True)
-        self.assertEndsWith(result[0], 'one\n')
-        self.assertEqual('', result[1])
-
 
 class TestKnownFailure(tests.TestCase):
 
@@ -2681,10 +2748,52 @@
 
 class TestTestSuite(tests.TestCase):
 
+    def test__test_suite_testmod_names(self):
+        # Test that a plausible list of test module names are returned
+        # by _test_suite_testmod_names.
+        test_list = tests._test_suite_testmod_names()
+        self.assertSubset([
+            'bzrlib.tests.blackbox',
+            'bzrlib.tests.per_transport',
+            'bzrlib.tests.test_selftest',
+            ],
+            test_list)
+
+    def test__test_suite_modules_to_doctest(self):
+        # Test that a plausible list of modules to doctest is returned
+        # by _test_suite_modules_to_doctest.
+        test_list = tests._test_suite_modules_to_doctest()
+        self.assertSubset([
+            'bzrlib.timestamp',
+            ],
+            test_list)
+
     def test_test_suite(self):
-        # This test is slow - it loads the entire test suite to operate, so we
-        # do a single test with one test in each category
-        test_list = [
+        # test_suite() loads the entire test suite to operate. To avoid this
+        # overhead, and yet still be confident that things are happening,
+        # we temporarily replace two functions used by test_suite with
+        # test doubles that supply a few sample tests to load, and check they
+        # are loaded.
+        calls = []
+        def _test_suite_testmod_names():
+            calls.append("testmod_names")
+            return [
+                'bzrlib.tests.blackbox.test_branch',
+                'bzrlib.tests.per_transport',
+                'bzrlib.tests.test_selftest',
+                ]
+        original_testmod_names = tests._test_suite_testmod_names
+        def _test_suite_modules_to_doctest():
+            calls.append("modules_to_doctest")
+            return ['bzrlib.timestamp']
+        orig_modules_to_doctest = tests._test_suite_modules_to_doctest
+        def restore_names():
+            tests._test_suite_testmod_names = original_testmod_names
+            tests._test_suite_modules_to_doctest = orig_modules_to_doctest
+        self.addCleanup(restore_names)
+        tests._test_suite_testmod_names = _test_suite_testmod_names
+        tests._test_suite_modules_to_doctest = _test_suite_modules_to_doctest
+        expected_test_list = [
             # testmod_names
             'bzrlib.tests.blackbox.test_branch.TestBranch.test_branch',
             ('bzrlib.tests.per_transport.TransportTests'
@@ -2695,13 +2804,16 @@
         # plugins can't be tested that way since selftest may be run with
         # --no-plugins
         ]
-        suite = tests.test_suite(test_list)
-        self.assertEquals(test_list, _test_ids(suite))
+        suite = tests.test_suite()
+        self.assertEqual(set(["testmod_names", "modules_to_doctest"]),
+            set(calls))
+        self.assertSubset(expected_test_list, _test_ids(suite))
 
     def test_test_suite_list_and_start(self):
         # We cannot test this at the same time as the main load, because we want
-        # to know that starting_with == None works. So a second full load is
-        # incurred.
+        # to know that starting_with == None works. So a second load is
+        # incurred - note that the starting_with parameter causes a partial load
+        # rather than a full load so this test should be pretty quick.
         test_list = ['bzrlib.tests.test_selftest.TestTestSuite.test_test_suite']
         suite = tests.test_suite(test_list,
             ['bzrlib.tests.test_selftest.TestTestSuite'])
@@ -2853,19 +2965,3 @@
                 self.verbosity)
         tests.run_suite(suite, runner_class=MyRunner, stream=StringIO())
         self.assertLength(1, calls)
-
-    def test_done(self):
-        """run_suite should call result.done()"""
-        self.calls = 0
-        def one_more_call(): self.calls += 1
-        def test_function():
-            pass
-        test = unittest.FunctionTestCase(test_function)
-        class InstrumentedTestResult(tests.ExtendedTestResult):
-            def done(self): one_more_call()
-        class MyRunner(tests.TextTestRunner):
-            def run(self, test):
-                return InstrumentedTestResult(self.stream, self.descriptions,
-                    self.verbosity)
-        tests.run_suite(test, runner_class=MyRunner, stream=StringIO())
-        self.assertEquals(1, self.calls)
 
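The rewritten `test_test_suite` above swaps two module-level functions for test doubles and registers one cleanup to put them back, so the restore runs even when an assertion fails. The same stub-and-`addCleanup` pattern, reduced to a generic `unittest` sketch (`math.sqrt` stands in for the bzrlib functions; this is not bzrlib code):

```python
import math
import unittest

class StubExample(unittest.TestCase):
    """Stub a module-level function and restore it via addCleanup."""

    def test_with_stubbed_function(self):
        calls = []
        def fake_sqrt(x):
            calls.append(x)
            return 3.0
        # Register the restore *before* patching; math.sqrt is evaluated
        # now, so the real function is what gets put back.
        self.addCleanup(setattr, math, 'sqrt', math.sqrt)
        math.sqrt = fake_sqrt
        self.assertEqual(3.0, math.sqrt(9))
        self.assertEqual([9], calls)
```

`TestCase.run()` invokes cleanups after the test body, so `math.sqrt` is the real function again before the next test starts.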
=== modified file 'bzrlib/tests/test_shelf.py'
--- bzrlib/tests/test_shelf.py 2009-08-26 07:40:38 +0000
+++ bzrlib/tests/test_shelf.py 2009-08-28 05:00:33 +0000
@@ -476,6 +476,8 @@
     def test_shelve_skips_added_root(self):
         """Skip adds of the root when iterating through shelvable changes."""
         tree = self.make_branch_and_tree('tree')
+        tree.lock_tree_write()
+        self.addCleanup(tree.unlock)
         creator = shelf.ShelfCreator(tree, tree.basis_tree())
         self.addCleanup(creator.finalize)
         self.assertEqual([], list(creator.iter_shelvable()))
 
=== modified file 'bzrlib/tests/test_smart.py'
--- bzrlib/tests/test_smart.py 2009-08-17 23:15:55 +0000
+++ bzrlib/tests/test_smart.py 2009-09-03 15:26:27 +0000
@@ -36,6 +36,7 @@
     smart,
     tests,
     urlutils,
+    versionedfile,
     )
 from bzrlib.branch import Branch, BranchReferenceFormat
 import bzrlib.smart.branch
@@ -87,8 +88,7 @@
         if self._chroot_server is None:
             backing_transport = tests.TestCaseWithTransport.get_transport(self)
             self._chroot_server = chroot.ChrootServer(backing_transport)
-            self._chroot_server.setUp()
-            self.addCleanup(self._chroot_server.tearDown)
+            self.start_server(self._chroot_server)
         t = get_transport(self._chroot_server.get_url())
         if relpath is not None:
             t = t.clone(relpath)
@@ -113,6 +113,25 @@
         return self.get_transport().get_smart_medium()
 
 
+class TestByteStreamToStream(tests.TestCase):
+
+    def test_repeated_substreams_same_kind_are_one_stream(self):
+        # Make a stream - an iterable of bytestrings.
+        stream = [('text', [versionedfile.FulltextContentFactory(('k1',), None,
+            None, 'foo')]),('text', [
+            versionedfile.FulltextContentFactory(('k2',), None, None, 'bar')])]
+        fmt = bzrdir.format_registry.get('pack-0.92')().repository_format
+        bytes = smart.repository._stream_to_byte_stream(stream, fmt)
+        streams = []
+        # Iterate the resulting iterable; checking that we get only one stream
+        # out.
+        fmt, stream = smart.repository._byte_stream_to_stream(bytes)
+        for kind, substream in stream:
+            streams.append((kind, list(substream)))
+        self.assertLength(1, streams)
+        self.assertLength(2, streams[0][1])
+
+
 class TestSmartServerResponse(tests.TestCase):
 
     def test__eq__(self):
 
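The new `TestByteStreamToStream` test above checks that `_byte_stream_to_stream` folds adjacent substreams of the same kind into a single substream. Generically, that is a merge keyed on consecutive kinds; a sketch of the expected behaviour (not bzrlib's implementation, which decodes the smart-server wire encoding):

```python
from itertools import chain, groupby

def combine_substreams(stream):
    """Merge adjacent (kind, substream) pairs that share a kind.

    `stream` is an iterable of (kind, iterable_of_records) pairs;
    consecutive entries with the same kind come back as one combined
    substream, mirroring the assertLength(1, streams) expectation.
    """
    for kind, group in groupby(stream, key=lambda item: item[0]):
        # groupby groups *consecutive* equal kinds; each combined
        # substream must be consumed before advancing to the next group,
        # just as record streams are consumed during fetch.
        yield kind, chain.from_iterable(sub for _, sub in group)
```

Feeding two `'text'` substreams through this yields one `'text'` stream carrying both records.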
=== modified file 'bzrlib/tests/test_transport.py'
--- bzrlib/tests/test_transport.py 2009-03-23 14:59:43 +0000
+++ bzrlib/tests/test_transport.py 2009-08-27 22:17:35 +0000
@@ -363,24 +363,22 @@
363 def test_abspath(self):363 def test_abspath(self):
364 # The abspath is always relative to the chroot_url.364 # The abspath is always relative to the chroot_url.
365 server = ChrootServer(get_transport('memory:///foo/bar/'))365 server = ChrootServer(get_transport('memory:///foo/bar/'))
366 server.setUp()366 self.start_server(server)
367 transport = get_transport(server.get_url())367 transport = get_transport(server.get_url())
368 self.assertEqual(server.get_url(), transport.abspath('/'))368 self.assertEqual(server.get_url(), transport.abspath('/'))
369369
370 subdir_transport = transport.clone('subdir')370 subdir_transport = transport.clone('subdir')
371 self.assertEqual(server.get_url(), subdir_transport.abspath('/'))371 self.assertEqual(server.get_url(), subdir_transport.abspath('/'))
372 server.tearDown()
373372
374 def test_clone(self):373 def test_clone(self):
375 server = ChrootServer(get_transport('memory:///foo/bar/'))374 server = ChrootServer(get_transport('memory:///foo/bar/'))
376 server.setUp()375 self.start_server(server)
377 transport = get_transport(server.get_url())376 transport = get_transport(server.get_url())
378 # relpath from root and root path are the same377 # relpath from root and root path are the same
379 relpath_cloned = transport.clone('foo')378 relpath_cloned = transport.clone('foo')
380 abspath_cloned = transport.clone('/foo')379 abspath_cloned = transport.clone('/foo')
381 self.assertEqual(server, relpath_cloned.server)380 self.assertEqual(server, relpath_cloned.server)
382 self.assertEqual(server, abspath_cloned.server)381 self.assertEqual(server, abspath_cloned.server)
383 server.tearDown()
384382
385 def test_chroot_url_preserves_chroot(self):383 def test_chroot_url_preserves_chroot(self):
386 """Calling get_transport on a chroot transport's base should produce a384 """Calling get_transport on a chroot transport's base should produce a
@@ -393,12 +391,11 @@
393 new_transport = get_transport(parent_url)391 new_transport = get_transport(parent_url)
394 """392 """
395 server = ChrootServer(get_transport('memory:///path/subpath'))393 server = ChrootServer(get_transport('memory:///path/subpath'))
396 server.setUp()394 self.start_server(server)
397 transport = get_transport(server.get_url())395 transport = get_transport(server.get_url())
398 new_transport = get_transport(transport.base)396 new_transport = get_transport(transport.base)
399 self.assertEqual(transport.server, new_transport.server)397 self.assertEqual(transport.server, new_transport.server)
400 self.assertEqual(transport.base, new_transport.base)398 self.assertEqual(transport.base, new_transport.base)
401 server.tearDown()
402399
403 def test_urljoin_preserves_chroot(self):400 def test_urljoin_preserves_chroot(self):
404 """Using urlutils.join(url, '..') on a chroot URL should not produce a401 """Using urlutils.join(url, '..') on a chroot URL should not produce a
@@ -410,11 +407,10 @@
410 new_transport = get_transport(parent_url)407 new_transport = get_transport(parent_url)
411 """408 """
412 server = ChrootServer(get_transport('memory:///path/'))409 server = ChrootServer(get_transport('memory:///path/'))
413 server.setUp()410 self.start_server(server)
414 transport = get_transport(server.get_url())411 transport = get_transport(server.get_url())
415 self.assertRaises(412 self.assertRaises(
416 InvalidURLJoin, urlutils.join, transport.base, '..')413 InvalidURLJoin, urlutils.join, transport.base, '..')
417 server.tearDown()
418414
419415
420class ChrootServerTest(TestCase):416class ChrootServerTest(TestCase):
@@ -428,7 +424,10 @@
428 backing_transport = MemoryTransport()424 backing_transport = MemoryTransport()
429 server = ChrootServer(backing_transport)425 server = ChrootServer(backing_transport)
430 server.setUp()426 server.setUp()
431 self.assertTrue(server.scheme in _get_protocol_handlers().keys())427 try:
428 self.assertTrue(server.scheme in _get_protocol_handlers().keys())
429 finally:
430 server.tearDown()
432431
433 def test_tearDown(self):432 def test_tearDown(self):
434 backing_transport = MemoryTransport()433 backing_transport = MemoryTransport()
@@ -441,8 +440,10 @@
441 backing_transport = MemoryTransport()440 backing_transport = MemoryTransport()
442 server = ChrootServer(backing_transport)441 server = ChrootServer(backing_transport)
443 server.setUp()442 server.setUp()
444 self.assertEqual('chroot-%d:///' % id(server), server.get_url())443 try:
445 server.tearDown()444 self.assertEqual('chroot-%d:///' % id(server), server.get_url())
445 finally:
446 server.tearDown()
446447
447448
448class ReadonlyDecoratorTransportTest(TestCase):449class ReadonlyDecoratorTransportTest(TestCase):
@@ -460,15 +461,12 @@
460 import bzrlib.transport.readonly as readonly461 import bzrlib.transport.readonly as readonly
461 # connect to '.' via http which is not listable462 # connect to '.' via http which is not listable
462 server = HttpServer()463 server = HttpServer()
463 server.setUp()464 self.start_server(server)
464 try:465 transport = get_transport('readonly+' + server.get_url())
465 transport = get_transport('readonly+' + server.get_url())466 self.failUnless(isinstance(transport,
466 self.failUnless(isinstance(transport,467 readonly.ReadonlyTransportDecorator))
467 readonly.ReadonlyTransportDecorator))468 self.assertEqual(False, transport.listable())
468 self.assertEqual(False, transport.listable())469 self.assertEqual(True, transport.is_readonly())
469 self.assertEqual(True, transport.is_readonly())
470 finally:
471 server.tearDown()
472470
473471
474class FakeNFSDecoratorTests(TestCaseInTempDir):472class FakeNFSDecoratorTests(TestCaseInTempDir):
@@ -492,31 +490,24 @@
492 from bzrlib.tests.http_server import HttpServer490 from bzrlib.tests.http_server import HttpServer
493 # connect to '.' via http which is not listable491 # connect to '.' via http which is not listable
494 server = HttpServer()492 server = HttpServer()
495 server.setUp()493 self.start_server(server)
496 try:494 transport = self.get_nfs_transport(server.get_url())
497 transport = self.get_nfs_transport(server.get_url())495 self.assertIsInstance(
-            self.assertIsInstance(
-                transport, bzrlib.transport.fakenfs.FakeNFSTransportDecorator)
-            self.assertEqual(False, transport.listable())
-            self.assertEqual(True, transport.is_readonly())
-        finally:
-            server.tearDown()
+        self.assertIsInstance(
+            transport, bzrlib.transport.fakenfs.FakeNFSTransportDecorator)
+        self.assertEqual(False, transport.listable())
+        self.assertEqual(True, transport.is_readonly())
 
     def test_fakenfs_server_default(self):
         # a FakeNFSServer() should bring up a local relpath server for itself
         import bzrlib.transport.fakenfs as fakenfs
         server = fakenfs.FakeNFSServer()
-        server.setUp()
-        try:
-            # the url should be decorated appropriately
-            self.assertStartsWith(server.get_url(), 'fakenfs+')
-            # and we should be able to get a transport for it
-            transport = get_transport(server.get_url())
-            # which must be a FakeNFSTransportDecorator instance.
-            self.assertIsInstance(
-                transport, fakenfs.FakeNFSTransportDecorator)
-        finally:
-            server.tearDown()
+        self.start_server(server)
+        # the url should be decorated appropriately
+        self.assertStartsWith(server.get_url(), 'fakenfs+')
+        # and we should be able to get a transport for it
+        transport = get_transport(server.get_url())
+        # which must be a FakeNFSTransportDecorator instance.
+        self.assertIsInstance(transport, fakenfs.FakeNFSTransportDecorator)
 
     def test_fakenfs_rename_semantics(self):
         # a FakeNFS transport must mangle the way rename errors occur to
@@ -587,8 +578,7 @@
     def setUp(self):
         super(TestTransportImplementation, self).setUp()
         self._server = self.transport_server()
-        self._server.setUp()
-        self.addCleanup(self._server.tearDown)
+        self.start_server(self._server)
 
     def get_transport(self, relpath=None):
         """Return a connected transport to the local directory.
 
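The hunks above replace each test's manual `server.setUp()` plus `try`/`finally: server.tearDown()` boilerplate with a single `start_server()` call. A minimal sketch of how such a helper can be built on top of `addCleanup`; the `FakeServer` and `TestCaseSketch` classes here are illustrative stand-ins, not bzrlib's actual implementation:

```python
class FakeServer(object):
    """Stand-in for a test server with the setUp/tearDown protocol."""

    def __init__(self, log):
        self.log = log

    def setUp(self):
        self.log.append('setUp')

    def tearDown(self):
        self.log.append('tearDown')


class TestCaseSketch(object):
    """Sketch of the relevant slice of a TestCase."""

    def __init__(self):
        self._cleanups = []

    def addCleanup(self, callable):
        self._cleanups.append(callable)

    def start_server(self, server):
        # Start the server now and guarantee teardown later, so callers
        # no longer need their own try/finally blocks.
        server.setUp()
        self.addCleanup(server.tearDown)

    def run_cleanups(self):
        # Cleanups run in reverse registration order, as in unittest.
        while self._cleanups:
            self._cleanups.pop()()


log = []
case = TestCaseSketch()
case.start_server(FakeServer(log))
case.run_cleanups()
# log is now ['setUp', 'tearDown']
```

Centralizing the teardown registration is what lets the diff drop a level of indentation from every test body.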
=== modified file 'doc/_templates/index.html'
--- doc/_templates/index.html 2009-07-22 14:36:38 +0000
+++ doc/_templates/index.html 2009-08-18 00:10:19 +0000
@@ -26,19 +26,17 @@
       <p class="biglink"><a class="biglink" href="{{ pathto("en/upgrade-guide/index") }}">Upgrade Guide</a><br/>
          <span class="linkdescr">moving to Bazaar 2.x</span>
       </p>
-      <p class="biglink"><a class="biglink" href="{{ pathto("en/migration/index") }}">Migration Docs</a><br/>
+      <p class="biglink"><a class="biglink" href="http://doc.bazaar-vcs.org/migration/en/">Migration Docs</a><br/>
          <span class="linkdescr">for refugees of other tools</span>
       </p>
-      <p class="biglink"><a class="biglink" href="{{ pathto("developers/index") }}">Developer Docs</a><br/>
-         <span class="linkdescr">polices and tools for giving back</span>
+      <p class="biglink"><a class="biglink" href="http://doc.bazaar-vcs.org/plugins/en/">Plugins Guide</a><br/>
+         <span class="linkdescr">help on popular plugins</span>
       </p>
     </td></tr>
   </table>
 
-<p>Other languages:
-<a href="{{ pathto("index.es") }}">Spanish</a>,
-<a href="{{ pathto("index.ru") }}">Russian</a>
-</p>
+<p>Keen to help? See the <a href="{{ pathto("developers/index") }}">Developer Docs</a>
+for policies and tools on contributing code, tests and documentation.</p>
 
 
 <h2>Related Links</h2>
@@ -59,4 +57,9 @@
   </td></tr>
 </table>
 
+<hr>
+<p>Other languages:
+<a href="{{ pathto("index.es") }}">Spanish</a>,
+<a href="{{ pathto("index.ru") }}">Russian</a>
+</p>
 {% endblock %}
 
=== modified file 'doc/contents.txt'
--- doc/contents.txt 2009-07-22 13:41:01 +0000
+++ doc/contents.txt 2009-08-18 00:10:19 +0000
@@ -20,7 +20,6 @@
 
    en/release-notes/index
    en/upgrade-guide/index
-   en/migration/index
    developers/index
 
 
 
=== modified file 'doc/developers/bug-handling.txt'
--- doc/developers/bug-handling.txt 2009-08-24 00:29:31 +0000
+++ doc/developers/bug-handling.txt 2009-08-24 20:16:15 +0000
@@ -142,12 +142,8 @@
     it's not a good idea for a developer to spend time reproducing the bug
     until they're going to work on it.)
 Triaged
-    This is an odd state - one we consider a bug in launchpad, as it really
-    means "Importance has been set". We use this to mean the same thing
-    as confirmed, and set no preference on whether Confirmed or Triaged are
-    used. Please do not change a "Confirmed" bug to "Triaged" or vice verca -
-    any reports we create or use will always search for both "Confirmed" and
-    "Triaged" or neither "Confirmed" nor "Triaged".
+    We don't use this status. If it is set, it means the same as
+    Confirmed.
 In Progress
     Someone has started working on this.
 Won't Fix
 
=== removed directory 'doc/en/migration'
=== removed file 'doc/en/migration/index.txt'
--- doc/en/migration/index.txt 2009-07-22 13:41:01 +0000
+++ doc/en/migration/index.txt 1970-01-01 00:00:00 +0000
@@ -1,6 +0,0 @@
-Bazaar Migration Guide
-======================
-
-This guide is under development. For notes collected so far, see
-http://bazaar-vcs.org/BzrMigration/.
-