Merge lp:~jameinel/bzr/1.16-commit-fulltext into lp:~bzr/bzr/trunk-old

Proposed by John A Meinel
Status: Superseded
Proposed branch: lp:~jameinel/bzr/1.16-commit-fulltext
Merge into: lp:~bzr/bzr/trunk-old
Diff against target: 485 lines
To merge this branch: bzr merge lp:~jameinel/bzr/1.16-commit-fulltext
Reviewer: bzr-core (status: Pending)
Review via email: mp+6988@code.launchpad.net

This proposal has been superseded by a proposal from 2009-06-04.

John A Meinel (jameinel) wrote:

This branch adds a new API, VersionedFiles.add_text(). If people really want, I could change it to VF.add_chunks(), but add_text() fits what I needed and was expedient.

The main effect is to change 'bzr commit' to use file.read() rather than file.readlines(), and then to pass that on to VF.add_text() rather than VF.add_lines().

It also removes some of the code paths that were causing us to copy the in-memory structures repeatedly. It doesn't completely remove the need for a list of lines during a Knit commit, but it *does* remove that need during a --dev6 commit.
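
In other words, the commit path changes roughly like this (a sketch condensed from the repository.py hunk in the diff below; 'key' and 'parent_keys' stand in for the real arguments):

    # Before: materialize a list of lines, which add_lines() joins again
    # internally anyway.
    lines = file_obj.readlines()
    sha1, size = repo.texts.add_lines(key, parent_keys, lines,
        nostore_sha=nostore_sha, random_id=False, check_content=False)[0:2]

    # After: read a single string and hand it over unsplit.
    text = file_obj.read()
    sha1, size = repo.texts.add_text(key, parent_keys, text,
        nostore_sha=nostore_sha, random_id=False, check_content=False)[0:2]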

To test this, I created a 90MB file consisting mostly of 20-byte strings, with no final newline. I then did:
rm -rf .bzr; bzr init --format=X; bzr add; time bzr commit -Dmemory -m "bigfile"
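
(The test file itself isn't included here; a sketch like the following generates an equivalent one. The exact content is an assumption - all that matters for the measurement is mostly-20-byte lines and no final newline:)

    import random, string

    with open('bigfile', 'wb') as f:
        for _ in xrange((90 * 1024 * 1024) // 21):
            f.write(''.join(random.choice(string.ascii_lowercase)
                            for _ in xrange(20)) + '\n')
        f.write('last line, no newline')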

For --pack-0.92:
  pre:  469,748 kB, 5.554s
  post: 360,836 kB, 4.789s

For --development6-rich-root:
  pre:  589,732 kB, 7.785s
  post: 348,796 kB, 5.803s

So it is both faster and smaller, though I still need to explore why --dev6 isn't more memory friendly. It seems to be because of the DeltaIndex structures that are part of GroupCompress blocks. It might be worthwhile to avoid creating those on the first insert into a new group. (For the 90MB file, it seems to allocate 97MB for the 'loose' index, and then packs that into a 134MB index that has empty slots.)

Robert Collins (lifeless) wrote:

On Tue, 2009-06-02 at 20:50 +0000, John A Meinel wrote:
> John A Meinel has proposed merging lp:~jameinel/bzr/1.16-commit-fulltext into lp:bzr.
>
> Requested reviews:
> bzr-core (bzr-core)
>
> This branch adds a new API, VersionedFiles.add_text(). If people really want, I could change it to VF.add_chunks(), but add_text() fits what I needed and was expedient.

Is it at all possible to use insert_record_stream?

I'd really like to shrink the VF surface area, not increase it.

-Rob

John A Meinel (jameinel) wrote:

Robert Collins wrote:
> On Tue, 2009-06-02 at 20:50 +0000, John A Meinel wrote:
>> John A Meinel has proposed merging lp:~jameinel/bzr/1.16-commit-fulltext into lp:bzr.
>>
>> Requested reviews:
>> bzr-core (bzr-core)
>>
>> This branch adds a new API, VersionedFiles.add_text(). If people really want, I could change it to VF.add_chunks(), but add_text() fits what I needed and was expedient.
>
> Is it at all possible to use insert_record_stream?
>
> I'd really like to shrink the VF surface area, not increase it.
>
> -Rob
>

Not trivially.

1) It is an incompatible API change to insert_record_stream

2) It requires setting up a FulltextContentFactory and passing in a
stream of 1 entry just to add a text, which isn't particularly nice (see
the sketch after this list).

3) It requires adding lots of parameters like 'nostore_sha',
'random_id', etc. onto insert_record_stream

4) It requires rewriting the internals of
KnitVersionedFiles.insert_record_stream to *not* thunk back to
self.add_lines(chunks_to_lines(record.get_bytes_as('chunked')))

5) nostore_sha especially doesn't fit with the theology of
insert_record_stream. It is really only applicable to a single text, and
insert_record_stream is really designed around many texts. Wedging new
parameters onto a function where it doesn't really fit doesn't seem
*better*.

6) As for VF surface area, there is at least a default implementation
that simply thunks over to .add_lines() for those that don't strictly
care about memory performance. (And thus works fine for Weaves, etc.)
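
(To make point 2 concrete, this is essentially what the GroupCompress
add_text() in the diff below does internally; routing single texts
through the public stream API would make every caller write the
equivalent of:)

    record = FulltextContentFactory(key, parents, None, text)
    sha1 = list(self._insert_record_stream([record], random_id=random_id,
                                           nostore_sha=nostore_sha))[0]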

In theory we could try to layer it so that we had an 'ongoing' stream,
and 'yield' texts to be inserted as we find them. But that really
doesn't fit 'nostore_sha' since that also needs to be passed in, and
needs to raise an exception which breaks the stream.
Also, I thought we *wanted* commit for groupcompress to not have to do
deltas, and if we stream the texts in, we would spend a modest amount of
time getting poor compression between text files. (Note that we were
already spending that time to compute the delta index, but I have a
patch which fixes that.)

I can understand wanting to shrink the API. If you really push on it,
I'm willing to deprecate .add_lines() and write a .add_chunks() that is
meant to replace it (since you can call .add_chunks(lines) as well as
.add_chunks([text])). However, chunks fit slightly worse for knits,
since the Content code, annotation, and deltas need lines anyway, and
GroupCompress wants fulltexts...
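
(For what it's worth, a default add_chunks() could thunk much the way
the fallback VersionedFiles.add_text() in this branch does - a sketch
only, since add_chunks() does not exist in this branch:)

    def add_chunks(self, key, parents, chunks, **kwargs):
        # 'chunks' is any iterable of byte strings whose concatenation
        # is the fulltext: a list of lines, or a one-element [text].
        return self.add_lines(key, parents,
            osutils.split_lines(''.join(chunks)), **kwargs)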

So if you push hard, I'll try to find the time to do it. But this was
*much* easier.

John
=:->

Robert Collins (lifeless) wrote:

On Wed, 2009-06-03 at 03:00 +0000, John A Meinel wrote:
>
> Not trivially.
>
> 1) It is an incompatible API change to insert_record_stream

Yes. Doing this before 2.0 would be better than doing it later.

> 2) It requires setting up a FulltextContentFactory and passing in a
> stream of 1 entry just to add a text, which isn't particularly nice.

record_iter_changes would pass a generator into
texts.insert_record_stream, e.g.:

    text_details = self.repository.texts.insert_record_stream(
        self._ric_texts, ...)
    for details in text_details:
        [...]

> 3) It requires adding lots of parameters like 'nostore_sha', and
> 'random_id', etc, onto insert_record_stream

Or onto the factory. I'm not sure offhand which is best.

> 4) It requires rewriting the internals of
> KnitVersionedFiles.insert_record_stream to *not* thunk back to
> self.add_lines(chunks_to_lines(record.get_bytes_as('chunked')))

This is fairly straightforward: move add_lines to call
self.insert_record_stream appropriately. I did that for GCVF and it
worked well.

> 5) nostore_sha especially doesn't fit with the theology of
> insert_record_stream. It is really only applicable to a single text, and
> insert_record_stream is really designed around many texts. Wedging new
> parameters onto a function where it doesn't really fit doesn't seem
> *better*.

Agreed; so perhaps an attribute on the factory.
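
(Hypothetically, something like the following - no such attribute
exists today:)

    record = FulltextContentFactory(key, parents, None, text)
    record.nostore_sha = sha  # hypothetical attribute, checked on insert
    vf.insert_record_stream([record])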

> 6) As for VF surface area, there is at least a default implementation
> that simply thunks over to .add_lines() for those that don't strictly
> care about memory performance. (And thus works fine for Weaves, etc.)

Well, I want to delete add_lines as it is.

> In theory we could try to layer it so that we had an 'ongoing' stream,
> and 'yield' texts to be inserted as we find them. But that really
> doesn't fit 'nostore_sha' since that also needs to be passed in, and
> needs to raise an exception which breaks the stream.

I'd yield data per record.

> Also, I thought we *wanted* commit for groupcompress to not have to do
> deltas, and if we stream the texts in, we would spend a modest amount of
> time getting poor compression between text files. (Note that we were
> already spending that time to compute the delta index, but I have a
> patch which fixes that.)

It would be good to actually measure... first commit, after all, suffers
hugely because every page in the CHKMap is add_text'd separately.

> I can understand wanting to shrink the API. If you really push on it,
> I'm willing to deprecate .add_lines() and write a .add_chunks() that is
> meant to replace it (since you can call .add_chunks(lines) as well as
> .add_chunks([text])). However, chunks fit slightly worse for knits,
> since the Content code, annotation, and deltas need lines anyway, and
> GroupCompress wants fulltexts...
>
> So if you push hard, I'll try to find the time to do it. But this was
> *much* easier.

I think we're at the point of maturity in bzr that it makes sense to
spend a small amount of time saying 'what's the cleanest way to do X',
and then talk about how to get there.

At the moment, expanding VF's API doesn't seem desirable, or the best
way to be tackling the problem. I think there should be precisely one
way to add texts to a VF, and that should be as small and fast as we can
make it.

-Rob

John A Meinel (jameinel) wrote:

...

> I think we're at the point of maturity in bzr that it makes sense to
> spend a small amount of time saying 'what's the cleanest way to do X',
> and then talk about how to get there.
>
> At the moment, expanding VF's API doesn't seem desirable, or the best
> way to be tackling the problem. I think there should be precisely one
> way to add texts to a VF, and that should be as small and fast as we can
> make it.
>
> -Rob
>

We're also blocking on a fairly significant win *today* because of a
potential desire to rewrite a lot of code to make something slightly
cleaner. (Which is something that has been a misfeature of the bzr
project for a *long* time.)

I'm not saying we shouldn't do this, I'm just pointing out the issue.

*For now* I don't feel like rewriting the entire insert_record_stream
stack just to get this in. So I'll leave this pending for now. (More
important is to actually get GC stacking working over bzr+ssh, etc.)

I'm also not sure that getting rid of the "add_this_text_to_the_repo"
convenience is really a net win. Having to write code like:

    vf.get_record_stream([one_key], 'unordered',
                         True).next().get_bytes_as('fulltext')

just to get a single text out is ugly. Not to mention prone to raising
bad exceptions like "AbsentContentFactory has no attribute
.get_bytes_as()", rather than something sane like "NoSuchRevision".
Having to do the same thing during *insert* is just as ugly.

I know you wanted to push people towards multi requests, and I
understand why. I'm not sure that completely removing the convenience
functions is a complete solution, though.

John
=:->

Robert Collins (lifeless) wrote:

On Wed, 2009-06-03 at 13:36 +0000, John A Meinel wrote:

Meta: I'm really confused vis-a-vis reviews and blocking. All I've done
here is *ask* your opinion on reusing insert_record_stream and provide
answers to some of the technical issues you see with that. I haven't set
a review status of veto or resubmit - and I don't think I've signalled
in any way that I would. So I don't know why you're feeling blocked.

> > I think we're at the point of maturity in bzr that it makes sense to
> > spend a small amount of time saying 'whats the cleanest way to do X',
> > and then talk about how to get there.
> >
> > At the moment, expanding VF's API doesn't seem desirable, or the best
> > way to be tackling the problem. I think there should be precisely one
> > way to add texts to a VF, and that should be as small and fast as we can
> > make it.
> >
> > -Rob
> >
>
> We're also blocking on a fairly significant win *today* because of a
> potential desire to rewrite a lot of code to make something slightly
> cleaner. (Which is something that has been a misfeature of the bzr
> project for a *long* time.)

I think we often ask the question - and that's important. Sometimes the
answer is 'yes, we should fix the deep issue' and sometimes it's 'let's
do it with the least possible changes'. Some things do get stuck, and
that's a shame - I've had that happen to concepts I've proposed, and
seen it happen to other people's ideas.

> I'm not saying we shouldn't do this, I'm just pointing out the issue.
>
> *For now* I don't feel like rewriting the entire insert_record_stream
> stack just to get this in. So I'll leave this pending for now. (More
> important is to actually get GC stacking working over bzr+ssh, etc.)

I think it would be a good idea to make the new method private then,
because of the open question hanging over it.

> I'm also not sure that getting rid of the "add_this_text_to_the_repo" is
> really a net win. Having to write code like:
> vf.get_record_stream([one_key], 'unordered',
> True).next().get_bytes_as('fulltext')
>
> just to get a single text out is ugly. Not to mention prone to raising
> bad exceptions like "AbsentContentFactory has no attribute
> .get_bytes_as()", rather than something sane like "NoSuchRevision".
> Having to do the same thing during *insert* is just as ugly.

And yet, single read/single write methods are terrible for networking,
and commit over the network is something we currently support - but
can't make even vaguely fast until commit no longer uses add_text_*.
With respect to exceptions, we actually do want different exceptions at
different places, so I think it has on balance cleaned some stuff up, in
fact.

> I know you wanted to push people towards multi requests, and I
> understand why. I'm not sure that completely removing the convenience
> functions is a complete solution, though.

I'd like us to get to the point where the core code doesn't do network
hostile things. Beyond that - well, I'm ok if plugins and library users
want to shoot themselves in the foot.

-Rob

Ian Clatworthy (ian-clatworthy) wrote:

Robert Collins wrote:
> On Wed, 2009-06-03 at 13:36 +0000, John A Meinel wrote:
>
>
>> We're also blocking on a fairly significant win *today* because of a
>> potential desire to rewrite a lot of code to make something slightly
>> cleaner. (Which is something that has been a misfeature of the bzr
>> project for a *long* time.)
>>

I agree this is a problem that we need to sort out. I occasionally put
useful code in plugins and leave it there simply because it can take
weeks of effort/debate to get APIs extended in bzrlib. If it only takes
a few hours to write the methods in the first place, it's more
productive for me to just leave the code out of the core and
cut-and-paste it when I need it again.

> I think we often ask the question - and that's important. Sometimes the
> answer is 'yes, we should fix the deep issue' and sometimes it's 'let's
> do it with the least possible changes'. Some things do get stuck, and
> that's a shame - I've had that happen to concepts I've proposed, and
> seen it happen to other people's ideas.
>
>

I agree it's really important to ask the questions. That's the whole
point of reviews.

>> *For now* I don't feel like rewriting the entire insert_record_stream
>> stack just to get this in. So I'll leave this pending for now. (More
>> important is to actually get GC stacking working over bzr+ssh, etc.)
>>
>
> I think it would be a good idea to make the new method private then,
> because of the open question hanging over it.
>
>

That sounds like a reasonable compromise. The other way to look at the
problem though is this:

  "Is this new API a step forward with medium-to-long term value?"

> I'd like us to get to the point where the core code doesn't do network
> hostile things. Beyond that - well, I'm ok if plugins and library users
> want to shoot themselves in the foot.
>

Right. But there are genuine use cases for having easy-to-use APIs that
are only appropriate locally, e.g. for import tools. I see no problem
with having such APIs *provided* the docstrings point the reader to more
network-friendly alternatives.

FWIW, if John's proposed API is faster than the current commonly-used
one, then it sounds like a one-or-two line change to fast-import for me
to take advantage of it. I appreciate that you want fast-import moving
towards using CommitBuilder instead of its own CommitImporter class, but
that's a much bigger change (and it's some time away).

Ian C.

Robert Collins (lifeless) wrote:

On Thu, 2009-06-04 at 04:09 +0000, Ian Clatworthy wrote:
> Robert Collins wrote:
> > On Wed, 2009-06-03 at 13:36 +0000, John A Meinel wrote:
> >
> >
> >> We're also blocking on a fairly significant win *today* because of a
> >> potential desire to rewrite a lot of code to make something slightly
> >> cleaner. (Which is something that has been a misfeature of the bzr
> >> project for a *long* time.)
> >>
>
> I agree this is a problem that we need to sort out. I occasionally put
> useful code in plugins and leave it there simply because it can take
> weeks of effort/debate to get APIs extended in bzrlib. If it only takes
> a few hours to write the methods in the first place, it's more
> productive for me to just leave the code out of the core and
> cut-and-paste it when I need it again.

We don't have a good place for experiments 'in core'. And one possible
answer is that we don't need one - that's what we have plugins for. For
instance, I note that your revno cache got rewritten to be significantly
different as you learnt more about the problem. I think this is healthy,
as long as you don't get blocked.

> >> *For now* I don't feel like rewriting the entire insert_record_stream
> >> stack just to get this in. So I'll leave this pending for now. (More
> >> important is to actually get GC stacking working over bzr+ssh, etc.)
> >>
> >
> > I think it would be a good idea to make the new method private then,
> > because of the open question hanging over it.
> >
> >
>
> That sounds like a reasonable compromise. The other way to look at the
> problem though is this:
>
> "Is this new API a step forward with medium-to-long term value?"

I think that's what the design aspect of the review seeks to answer; but
it's often hard to tell.

> > I'd like us to get to the point where the core code doesn't do network
> > hostile things. Beyond that - well, I'm ok if plugins and library users
> > want to shoot themselves in the foot.
> >
>
> Right. But there are genuine use cases for having easy-to-use,
> appropriate-locally-only APIs, e.g. import tools. I see no problems with
> having such APIs *provided* the docstrings point the reader to more
> network-friendly alternatives.

In this particular case I'd like to have them as adapters; such as:

    def add_text(versioned_files, bytes, ...):
        for details in versioned_files.insert_record_stream(
                [FulltextContentFactory(bytes, ...)]):
            return details

or whatever. That would separate them cleanly from the core API, prevent
them varying per implementation (easing testing) and make them not the
default way of working.
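
Fleshed out a little (still a sketch: it assumes insert_record_stream
learns to yield per-record details, which it does not do today, and the
argument names are placeholders):

    from bzrlib.versionedfile import FulltextContentFactory

    def add_text(versioned_files, key, parents, text):
        # Build a one-record stream and return the details for that record.
        stream = [FulltextContentFactory(key, parents, None, text)]
        for details in versioned_files.insert_record_stream(stream):
            return details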

> FWIW, if John's proposed API is faster than the current commonly-used
> one, then it sounds like a one-or-two line change to fast-import for me
> to take advantage of it. I appreciate that you want fast-import moving
> towards using CommitBuilder instead of its own CommitImporter class, but
> that's a much bigger change (and it's some time away).

I think it would be fine to use a private method in fast-import:
fast-import is trying for maximum speed, and you are keeping a close eye
on it.

-Rob

Preview Diff

=== modified file 'bzrlib/groupcompress.py'
--- bzrlib/groupcompress.py 2009-05-29 10:25:37 +0000
+++ bzrlib/groupcompress.py 2009-06-04 20:35:40 +0000
@@ -992,6 +992,26 @@
                                                nostore_sha=nostore_sha))[0]
         return sha1, length, None
 
+    def add_text(self, key, parents, text, parent_texts=None,
+                 nostore_sha=None, random_id=False,
+                 check_content=True):
+        """See VersionedFiles.add_text()."""
+        self._index._check_write_ok()
+        self._check_add(key, None, random_id, check_content=False)
+        if text.__class__ is not str:
+            raise errors.BzrBadParameterUnicode("text")
+        if parents is None:
+            # The caller might pass None if there is no graph data, but kndx
+            # indexes can't directly store that, so we give them
+            # an empty tuple instead.
+            parents = ()
+        # double handling for now. Make it work until then.
+        length = len(text)
+        record = FulltextContentFactory(key, parents, None, text)
+        sha1 = list(self._insert_record_stream([record], random_id=random_id,
+                                               nostore_sha=nostore_sha))[0]
+        return sha1, length, None
+
     def add_fallback_versioned_files(self, a_versioned_files):
         """Add a source of texts for texts not present in this knit.
 
@@ -1597,7 +1617,7 @@
             if refs:
                 for ref in refs:
                     if ref:
-                        raise KnitCorrupt(self,
+                        raise errors.KnitCorrupt(self,
                             "attempt to add node with parents "
                             "in parentless index.")
             refs = ()
 
=== modified file 'bzrlib/knit.py'
--- bzrlib/knit.py 2009-05-29 10:25:37 +0000
+++ bzrlib/knit.py 2009-06-04 20:35:40 +0000
@@ -909,18 +909,37 @@
             # indexes can't directly store that, so we give them
             # an empty tuple instead.
             parents = ()
+        line_bytes = ''.join(lines)
         return self._add(key, lines, parents,
-            parent_texts, left_matching_blocks, nostore_sha, random_id)
+            parent_texts, left_matching_blocks, nostore_sha, random_id,
+            line_bytes=line_bytes)
+
+    def add_text(self, key, parents, text, parent_texts=None,
+                 nostore_sha=None, random_id=False,
+                 check_content=True):
+        """See VersionedFiles.add_text()."""
+        self._index._check_write_ok()
+        self._check_add(key, None, random_id, check_content=False)
+        if text.__class__ is not str:
+            raise errors.BzrBadParameterUnicode("text")
+        if parents is None:
+            # The caller might pass None if there is no graph data, but kndx
+            # indexes can't directly store that, so we give them
+            # an empty tuple instead.
+            parents = ()
+        return self._add(key, None, parents,
+            parent_texts, None, nostore_sha, random_id,
+            line_bytes=text)
 
     def _add(self, key, lines, parents, parent_texts,
-        left_matching_blocks, nostore_sha, random_id):
+        left_matching_blocks, nostore_sha, random_id,
+        line_bytes):
         """Add a set of lines on top of version specified by parents.
 
         Any versions not present will be converted into ghosts.
         """
         # first thing, if the content is something we don't need to store, find
         # that out.
-        line_bytes = ''.join(lines)
         digest = sha_string(line_bytes)
         if nostore_sha == digest:
             raise errors.ExistingContent
@@ -947,13 +966,22 @@
 
         text_length = len(line_bytes)
         options = []
-        if lines:
-            if lines[-1][-1] != '\n':
-                # copy the contents of lines.
+        no_eol = False
+        # Note: line_bytes is not modified to add a newline, that is tracked
+        # via the no_eol flag. 'lines' *is* modified, because that is the
+        # general values needed by the Content code.
+        if line_bytes and line_bytes[-1] != '\n':
+            options.append('no-eol')
+            no_eol = True
+            # Copy the existing list, or create a new one
+            if lines is None:
+                lines = osutils.split_lines(line_bytes)
+            else:
                 lines = lines[:]
-                options.append('no-eol')
-                lines[-1] = lines[-1] + '\n'
-                line_bytes += '\n'
+                # Replace the last line with one that ends in a final newline
+                lines[-1] = lines[-1] + '\n'
+        if lines is None:
+            lines = osutils.split_lines(line_bytes)
 
         for element in key[:-1]:
             if type(element) != str:
@@ -965,7 +993,7 @@
         # Knit hunks are still last-element only
         version_id = key[-1]
         content = self._factory.make(lines, version_id)
-        if 'no-eol' in options:
+        if no_eol:
             # Hint to the content object that its text() call should strip the
             # EOL.
             content._should_strip_eol = True
@@ -986,8 +1014,11 @@
             if self._factory.__class__ is KnitPlainFactory:
                 # Use the already joined bytes saving iteration time in
                 # _record_to_data.
+                dense_lines = [line_bytes]
+                if no_eol:
+                    dense_lines.append('\n')
                 size, bytes = self._record_to_data(key, digest,
-                    lines, [line_bytes])
+                    lines, dense_lines)
             else:
                 # get mixed annotation + content and feed it into the
                 # serialiser.
@@ -1920,21 +1951,16 @@
             function spends less time resizing the final string.
         :return: (len, a StringIO instance with the raw data ready to read.)
         """
-        # Note: using a string copy here increases memory pressure with e.g.
-        # ISO's, but it is about 3 seconds faster on a 1.2Ghz intel machine
-        # when doing the initial commit of a mozilla tree. RBC 20070921
-        bytes = ''.join(chain(
-            ["version %s %d %s\n" % (key[-1],
-                                     len(lines),
-                                     digest)],
-            dense_lines or lines,
-            ["end %s\n" % key[-1]]))
-        if type(bytes) != str:
-            raise AssertionError(
-                'data must be plain bytes was %s' % type(bytes))
+        chunks = ["version %s %d %s\n" % (key[-1], len(lines), digest)]
+        chunks.extend(dense_lines or lines)
+        chunks.append("end %s\n" % key[-1])
+        for chunk in chunks:
+            if type(chunk) != str:
+                raise AssertionError(
+                    'data must be plain bytes was %s' % type(chunk))
         if lines and lines[-1][-1] != '\n':
             raise ValueError('corrupt lines value %r' % lines)
-        compressed_bytes = tuned_gzip.bytes_to_gzip(bytes)
+        compressed_bytes = tuned_gzip.chunks_to_gzip(chunks)
         return len(compressed_bytes), compressed_bytes
 
     def _split_header(self, line):
 
=== modified file 'bzrlib/repository.py'
--- bzrlib/repository.py 2009-06-03 21:31:43 +0000
+++ bzrlib/repository.py 2009-06-04 20:35:40 +0000
@@ -494,12 +494,12 @@
                 ie.executable = content_summary[2]
                 file_obj, stat_value = tree.get_file_with_stat(ie.file_id, path)
                 try:
-                    lines = file_obj.readlines()
+                    text = file_obj.read()
                 finally:
                     file_obj.close()
                 try:
                     ie.text_sha1, ie.text_size = self._add_text_to_weave(
-                        ie.file_id, lines, heads, nostore_sha)
+                        ie.file_id, text, heads, nostore_sha)
                     # Let the caller know we generated a stat fingerprint.
                     fingerprint = (ie.text_sha1, stat_value)
                 except errors.ExistingContent:
@@ -517,8 +517,7 @@
                 # carry over:
                 ie.revision = parent_entry.revision
                 return self._get_delta(ie, basis_inv, path), False, None
-            lines = []
-            self._add_text_to_weave(ie.file_id, lines, heads, None)
+            self._add_text_to_weave(ie.file_id, '', heads, None)
         elif kind == 'symlink':
             current_link_target = content_summary[3]
             if not store:
@@ -532,8 +531,7 @@
                 ie.symlink_target = parent_entry.symlink_target
                 return self._get_delta(ie, basis_inv, path), False, None
             ie.symlink_target = current_link_target
-            lines = []
-            self._add_text_to_weave(ie.file_id, lines, heads, None)
+            self._add_text_to_weave(ie.file_id, '', heads, None)
         elif kind == 'tree-reference':
             if not store:
                 if content_summary[3] != parent_entry.reference_revision:
@@ -544,8 +542,7 @@
                 ie.revision = parent_entry.revision
                 return self._get_delta(ie, basis_inv, path), False, None
             ie.reference_revision = content_summary[3]
-            lines = []
-            self._add_text_to_weave(ie.file_id, lines, heads, None)
+            self._add_text_to_weave(ie.file_id, '', heads, None)
         else:
             raise NotImplementedError('unknown kind')
         ie.revision = self._new_revision_id
@@ -745,7 +742,7 @@
                         entry.executable = True
                     else:
                         entry.executable = False
-                    if (carry_over_possible and 
+                    if (carry_over_possible and
                         parent_entry.executable == entry.executable):
                             # Check the file length, content hash after reading
                             # the file.
@@ -754,12 +751,12 @@
                         nostore_sha = None
                     file_obj, stat_value = tree.get_file_with_stat(file_id, change[1][1])
                     try:
-                        lines = file_obj.readlines()
+                        text = file_obj.read()
                     finally:
                         file_obj.close()
                     try:
                         entry.text_sha1, entry.text_size = self._add_text_to_weave(
-                            file_id, lines, heads, nostore_sha)
+                            file_id, text, heads, nostore_sha)
                         yield file_id, change[1][1], (entry.text_sha1, stat_value)
                     except errors.ExistingContent:
                         # No content change against a carry_over parent
@@ -774,7 +771,7 @@
                         parent_entry.symlink_target == entry.symlink_target):
                         carried_over = True
                     else:
-                        self._add_text_to_weave(change[0], [], heads, None)
+                        self._add_text_to_weave(change[0], '', heads, None)
                 elif kind == 'directory':
                     if carry_over_possible:
                         carried_over = True
@@ -782,7 +779,7 @@
                         # Nothing to set on the entry.
                         # XXX: split into the Root and nonRoot versions.
                         if change[1][1] != '' or self.repository.supports_rich_root():
-                            self._add_text_to_weave(change[0], [], heads, None)
+                            self._add_text_to_weave(change[0], '', heads, None)
                 elif kind == 'tree-reference':
                     if not self.repository._format.supports_tree_reference:
                         # This isn't quite sane as an error, but we shouldn't
@@ -797,7 +794,7 @@
                         parent_entry.reference_revision == reference_revision):
                         carried_over = True
                     else:
-                        self._add_text_to_weave(change[0], [], heads, None)
+                        self._add_text_to_weave(change[0], '', heads, None)
                 else:
                     raise AssertionError('unknown kind %r' % kind)
                 if not carried_over:
@@ -818,15 +815,15 @@
             self._require_root_change(tree)
         self.basis_delta_revision = basis_revision_id
 
-    def _add_text_to_weave(self, file_id, new_lines, parents, nostore_sha):
+    def _add_text_to_weave(self, file_id, new_text, parents, nostore_sha):
         # Note: as we read the content directly from the tree, we know its not
         # been turned into unicode or badly split - but a broken tree
         # implementation could give us bad output from readlines() so this is
         # not a guarantee of safety. What would be better is always checking
        # the content during test suite execution. RBC 20070912
         parent_keys = tuple((file_id, parent) for parent in parents)
-        return self.repository.texts.add_lines(
-            (file_id, self._new_revision_id), parent_keys, new_lines,
+        return self.repository.texts.add_text(
+            (file_id, self._new_revision_id), parent_keys, new_text,
             nostore_sha=nostore_sha, random_id=self.random_revid,
             check_content=False)[0:2]
 
=== modified file 'bzrlib/tests/test_tuned_gzip.py'
--- bzrlib/tests/test_tuned_gzip.py 2009-03-23 14:59:43 +0000
+++ bzrlib/tests/test_tuned_gzip.py 2009-06-04 20:35:40 +0000
@@ -85,3 +85,28 @@
         self.assertEqual('', stream.read())
         # and it should be new member time in the stream.
         self.failUnless(myfile._new_member)
+
+
+class TestToGzip(TestCase):
+
+    def assertToGzip(self, chunks):
+        bytes = ''.join(chunks)
+        gzfromchunks = tuned_gzip.chunks_to_gzip(chunks)
+        gzfrombytes = tuned_gzip.bytes_to_gzip(bytes)
+        self.assertEqual(gzfrombytes, gzfromchunks)
+        decoded = tuned_gzip.GzipFile(fileobj=StringIO(gzfromchunks)).read()
+        self.assertEqual(bytes, decoded)
+
+    def test_single_chunk(self):
+        self.assertToGzip(['a modest chunk\nwith some various\nbits\n'])
+
+    def test_simple_text(self):
+        self.assertToGzip(['some\n', 'strings\n', 'to\n', 'process\n'])
+
+    def test_large_chunks(self):
+        self.assertToGzip(['a large string\n'*1024])
+        self.assertToGzip(['a large string\n']*1024)
+
+    def test_enormous_chunks(self):
+        self.assertToGzip(['a large string\n'*1024*256])
+        self.assertToGzip(['a large string\n']*1024*256)
 
=== modified file 'bzrlib/tests/test_versionedfile.py'
--- bzrlib/tests/test_versionedfile.py 2009-05-01 18:09:24 +0000
+++ bzrlib/tests/test_versionedfile.py 2009-06-04 20:35:40 +0000
@@ -1471,6 +1471,58 @@
         self.addCleanup(lambda:self.cleanup(files))
         return files
 
+    def test_add_lines(self):
+        f = self.get_versionedfiles()
+        if self.key_length == 1:
+            key0 = ('r0',)
+            key1 = ('r1',)
+            key2 = ('r2',)
+            keyf = ('foo',)
+        else:
+            key0 = ('fid', 'r0')
+            key1 = ('fid', 'r1')
+            key2 = ('fid', 'r2')
+            keyf = ('fid', 'foo')
+        f.add_lines(key0, [], ['a\n', 'b\n'])
+        if self.graph:
+            f.add_lines(key1, [key0], ['b\n', 'c\n'])
+        else:
+            f.add_lines(key1, [], ['b\n', 'c\n'])
+        keys = f.keys()
+        self.assertTrue(key0 in keys)
+        self.assertTrue(key1 in keys)
+        records = []
+        for record in f.get_record_stream([key0, key1], 'unordered', True):
+            records.append((record.key, record.get_bytes_as('fulltext')))
+        records.sort()
+        self.assertEqual([(key0, 'a\nb\n'), (key1, 'b\nc\n')], records)
+
+    def test_add_text(self):
+        f = self.get_versionedfiles()
+        if self.key_length == 1:
+            key0 = ('r0',)
+            key1 = ('r1',)
+            key2 = ('r2',)
+            keyf = ('foo',)
+        else:
+            key0 = ('fid', 'r0')
+            key1 = ('fid', 'r1')
+            key2 = ('fid', 'r2')
+            keyf = ('fid', 'foo')
+        f.add_text(key0, [], 'a\nb\n')
+        if self.graph:
+            f.add_text(key1, [key0], 'b\nc\n')
+        else:
+            f.add_text(key1, [], 'b\nc\n')
+        keys = f.keys()
+        self.assertTrue(key0 in keys)
+        self.assertTrue(key1 in keys)
+        records = []
+        for record in f.get_record_stream([key0, key1], 'unordered', True):
+            records.append((record.key, record.get_bytes_as('fulltext')))
+        records.sort()
+        self.assertEqual([(key0, 'a\nb\n'), (key1, 'b\nc\n')], records)
+
     def test_annotate(self):
         files = self.get_versionedfiles()
         self.get_diamond_files(files)
@@ -1520,7 +1572,7 @@
             trailing_eol=trailing_eol, nograph=not self.graph,
             left_only=left_only, nokeys=nokeys)
 
-    def test_add_lines_nostoresha(self):
+    def _add_content_nostoresha(self, add_lines):
         """When nostore_sha is supplied using old content raises."""
         vf = self.get_versionedfiles()
         empty_text = ('a', [])
@@ -1528,7 +1580,12 @@
         sample_text_no_nl = ('c', ["foo\n", "bar"])
         shas = []
         for version, lines in (empty_text, sample_text_nl, sample_text_no_nl):
-            sha, _, _ = vf.add_lines(self.get_simple_key(version), [], lines)
+            if add_lines:
+                sha, _, _ = vf.add_lines(self.get_simple_key(version), [],
+                                         lines)
+            else:
+                sha, _, _ = vf.add_text(self.get_simple_key(version), [],
+                                        ''.join(lines))
             shas.append(sha)
         # we now have a copy of all the lines in the vf.
         for sha, (version, lines) in zip(
@@ -1537,10 +1594,19 @@
             self.assertRaises(errors.ExistingContent,
                 vf.add_lines, new_key, [], lines,
                 nostore_sha=sha)
+            self.assertRaises(errors.ExistingContent,
+                vf.add_text, new_key, [], ''.join(lines),
+                nostore_sha=sha)
             # and no new version should have been added.
             record = vf.get_record_stream([new_key], 'unordered', True).next()
             self.assertEqual('absent', record.storage_kind)
 
+    def test_add_lines_nostoresha(self):
+        self._add_content_nostoresha(add_lines=True)
+
+    def test_add_text_nostoresha(self):
+        self._add_content_nostoresha(add_lines=False)
+
     def test_add_lines_return(self):
         files = self.get_versionedfiles()
         # save code by using the stock data insertion helper.
 
=== modified file 'bzrlib/tuned_gzip.py'
--- bzrlib/tuned_gzip.py 2009-03-23 14:59:43 +0000
+++ bzrlib/tuned_gzip.py 2009-06-04 20:35:40 +0000
@@ -52,6 +52,18 @@
                   width=-zlib.MAX_WBITS, mem=zlib.DEF_MEM_LEVEL,
                   crc32=zlib.crc32):
     """Create a gzip file containing bytes and return its content."""
+    return chunks_to_gzip([bytes])
+
+
+def chunks_to_gzip(chunks, factory=zlib.compressobj,
+                   level=zlib.Z_DEFAULT_COMPRESSION, method=zlib.DEFLATED,
+                   width=-zlib.MAX_WBITS, mem=zlib.DEF_MEM_LEVEL,
+                   crc32=zlib.crc32):
+    """Create a gzip file containing chunks and return its content.
+
+    :param chunks: An iterable of strings. Each string can have arbitrary
+        layout.
+    """
     result = [
         '\037\213'  # self.fileobj.write('\037\213')  # magic header
         '\010'      # self.fileobj.write('\010')      # compression method
@@ -69,11 +81,17 @@
     # using a compressobj avoids a small header and trailer that the compress()
     # utility function adds.
     compress = factory(level, method, width, mem, 0)
-    result.append(compress.compress(bytes))
+    crc = 0
+    total_len = 0
+    for chunk in chunks:
+        crc = crc32(chunk, crc)
+        total_len += len(chunk)
+        zbytes = compress.compress(chunk)
+        if zbytes:
+            result.append(zbytes)
     result.append(compress.flush())
-    result.append(struct.pack("<L", LOWU32(crc32(bytes))))
     # size may exceed 2GB, or even 4GB
-    result.append(struct.pack("<L", LOWU32(len(bytes))))
+    result.append(struct.pack("<LL", LOWU32(crc), LOWU32(total_len)))
     return ''.join(result)
 
=== modified file 'bzrlib/versionedfile.py'
--- bzrlib/versionedfile.py 2009-04-29 17:02:36 +0000
+++ bzrlib/versionedfile.py 2009-06-04 20:35:40 +0000
@@ -829,6 +829,14 @@
         """
         raise NotImplementedError(self.add_lines)
 
+    def add_text(self, key, parents, text, parent_texts=None,
+                 nostore_sha=None, random_id=False, check_content=True):
+        return self.add_lines(key, parents, osutils.split_lines(text),
+                              parent_texts=parent_texts,
+                              nostore_sha=nostore_sha,
+                              random_id=random_id,
+                              check_content=check_content)
+
     def add_mpdiffs(self, records):
         """Add mpdiffs to this VersionedFile.
 