Merge lp:~jelmer/launchpad/506256-remove-popen into lp:launchpad
- 506256-remove-popen
- Merge into devel
Status: Merged
Approved by: Graham Binns
Approved revision: no longer in the source branch
Merged at revision: 11566
Proposed branch: lp:~jelmer/launchpad/506256-remove-popen
Merge into: lp:launchpad
Diff against target:
1176 lines (+222/-405), 17 files modified:
- lib/canonical/launchpad/webapp/tales.py (+1/-0)
- lib/lp/archiveuploader/tests/test_uploadprocessor.py (+53/-24)
- lib/lp/archiveuploader/uploadprocessor.py (+25/-9)
- lib/lp/buildmaster/enums.py (+7/-3)
- lib/lp/buildmaster/interfaces/buildfarmjob.py (+4/-0)
- lib/lp/buildmaster/interfaces/packagebuild.py (+2/-20)
- lib/lp/buildmaster/model/buildfarmjob.py (+10/-0)
- lib/lp/buildmaster/model/packagebuild.py (+35/-153)
- lib/lp/buildmaster/tests/test_buildfarmjob.py (+15/-0)
- lib/lp/buildmaster/tests/test_packagebuild.py (+29/-67)
- lib/lp/code/browser/sourcepackagerecipebuild.py (+1/-0)
- lib/lp/code/model/tests/test_sourcepackagerecipebuild.py (+6/-6)
- lib/lp/registry/model/sourcepackage.py (+6/-2)
- lib/lp/soyuz/browser/archive.py (+1/-0)
- lib/lp/soyuz/doc/buildd-slavescanner.txt (+12/-116)
- lib/lp/soyuz/model/archive.py (+3/-2)
- lib/lp/soyuz/model/binarypackagebuild.py (+12/-3)
To merge this branch: bzr merge lp:~jelmer/launchpad/506256-remove-popen
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Graham Binns (community) | release-critical | | Approve
Brad Crittenden (community) | code | | Approve
Julian Edwards (community) | | | Approve

Review via email: mp+34549@code.launchpad.net
Commit message
Builddmaster now moves build binaries away for later processing rather than invoking process-upload on them directly.
Description of the change
buildd-manager currently invokes the uploadprocessor on binaries that it has fetched from the buildd slaves.
Because this process is synchronous, it blocks the buildd manager from doing other work in the meantime, such as scheduling new builds, leaving the buildd slaves idle during that time.
This branch changes the buildd manager to move build results out of the way into a queue that can be processed independently by process-upload (the extensions to process-upload to support this landed earlier).
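As a rough illustration of the new naming scheme (a sketch distilled from the diff below, not the exact Launchpad code): each upload directory leaf is named `<timestamp>-<build_id>-<JOBTYPE>-<job_id>`, and the upload processor only needs the trailing segment to look the build farm job up again:

```python
def make_build_cookie(build_id, job_type, build_farm_job_id):
    # Mirrors PackageBuild.getBuildCookie() from the diff,
    # e.g. "42-PACKAGEBUILD-60".
    return '%s-%s-%s' % (build_id, job_type, build_farm_job_id)


def parse_build_upload_leaf_name(name):
    # The leaf directory is "<timestamp>-<cookie>"; only the trailing
    # dash-separated segment (the build farm job id) is needed.
    (job_id_str,) = name.split("-")[-1:]
    try:
        return int(job_id_str)
    except (TypeError, ValueError):
        raise ValueError(name)
```

Directories whose leaf name does not parse are skipped with a warning rather than aborting the whole processing run.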
Tests:
./bin/test lp.archiveuploader
./bin/test lp.buildmaster
There is still a single test, testNoFiles() in the archiveuploader, that is itself buggy. I'm looking into it at the moment, but wanted to submit the branch for review early because of the upcoming PQM closure. The fix for this test shouldn't involve any changes outside the test itself.
QA:
This branch has been running on dogfood for the past week or so in its current form and has been working well. We've tested with multiple buildds and thrown several hundred builds at it.
Deployment:
A cron job needs to be set up to run the following command regularly:
LPCURRENT/
Jelmer Vernooij (jelmer) wrote:
I need to change the lookups in the archiveuploader to be of binarypackagebuilds or sourcepackagebuilds rather than on packagebuilds in general, since otherwise we don't have a sensible verifySuccessBu
Graham Binns (gmb) wrote:
Abstaining until the tests are fixed.
Jelmer Vernooij (jelmer) wrote:
This turned out to be quite a complex issue; I'm fixing the last test now.
Julian Edwards (julian-edwards) wrote:
My two pence, as I mentioned on Mumble:
1. Please change the added __getitem__ to a getByID() method; we're deprecating __getitem__.
2. Typo in the new enum (s/process/
Otherwise, ROCK!
Brad Crittenden (bac) wrote:
Thanks for fixing the tests, Jelmer. They all ran locally for me.
Graham Binns (gmb) wrote:
RC=me on this. Please land this on db-devel rather than devel (I believe there's some argument you can pass to utils/ec2 in order to achieve this). We want to close devel for landings sooner rather than later, so all traffic should go to db-devel unless absolutely necessary.
Preview Diff
1 | === modified file 'lib/canonical/launchpad/webapp/tales.py' |
2 | --- lib/canonical/launchpad/webapp/tales.py 2010-08-27 11:19:54 +0000 |
3 | +++ lib/canonical/launchpad/webapp/tales.py 2010-09-16 00:48:58 +0000 |
4 | @@ -995,6 +995,7 @@ |
5 | BuildStatus.CHROOTWAIT: {'src': "/@@/build-chrootwait"}, |
6 | BuildStatus.SUPERSEDED: {'src': "/@@/build-superseded"}, |
7 | BuildStatus.BUILDING: {'src': "/@@/processing"}, |
8 | + BuildStatus.UPLOADING: {'src': "/@@/processing"}, |
9 | BuildStatus.FAILEDTOUPLOAD: {'src': "/@@/build-failedtoupload"}, |
10 | } |
11 | |
12 | |
13 | === modified file 'lib/lp/archiveuploader/tests/test_uploadprocessor.py' |
14 | --- lib/lp/archiveuploader/tests/test_uploadprocessor.py 2010-09-02 16:28:50 +0000 |
15 | +++ lib/lp/archiveuploader/tests/test_uploadprocessor.py 2010-09-16 00:48:58 +0000 |
16 | @@ -31,6 +31,7 @@ |
17 | from canonical.launchpad.testing.fakepackager import FakePackager |
18 | from canonical.launchpad.webapp.errorlog import ErrorReportingUtility |
19 | from canonical.testing import LaunchpadZopelessLayer |
20 | + |
21 | from lp.app.errors import NotFoundError |
22 | from lp.archiveuploader.uploadpolicy import ( |
23 | AbstractUploadPolicy, |
24 | @@ -41,7 +42,10 @@ |
25 | parse_build_upload_leaf_name, |
26 | UploadProcessor, |
27 | ) |
28 | -from lp.buildmaster.enums import BuildStatus |
29 | +from lp.buildmaster.enums import ( |
30 | + BuildFarmJobType, |
31 | + BuildStatus, |
32 | + ) |
33 | from lp.registry.interfaces.distribution import IDistributionSet |
34 | from lp.registry.interfaces.person import IPersonSet |
35 | from lp.registry.interfaces.pocket import PackagePublishingPocket |
36 | @@ -1861,17 +1865,26 @@ |
37 | self.uploadprocessor = self.setupBreezyAndGetUploadProcessor() |
38 | |
39 | def testInvalidLeafName(self): |
40 | - upload_dir = self.queueUpload("bar_1.0-1") |
41 | - self.uploadprocessor.processBuildUpload(upload_dir, "bar_1.0-1") |
42 | + # Directories with invalid leaf names should be skipped, |
43 | + # and a warning logged. |
44 | + upload_dir = self.queueUpload("bar_1.0-1", queue_entry="bar") |
45 | + self.uploadprocessor.processBuildUpload(upload_dir, "bar") |
46 | self.assertLogContains('Unable to extract build id from leaf ' |
47 | - 'name bar_1.0-1, skipping.') |
48 | + 'name bar, skipping.') |
49 | |
50 | def testNoBuildEntry(self): |
51 | - upload_dir = self.queueUpload("bar_1.0-1", queue_entry="42-60") |
52 | - self.assertRaises(NotFoundError, self.uploadprocessor.processBuildUpload, |
53 | - upload_dir, "42-60") |
54 | + # Directories with that refer to a nonexistent build |
55 | + # should be skipped and a warning logged. |
56 | + cookie = "%s-%d" % (BuildFarmJobType.PACKAGEBUILD.name, 42) |
57 | + upload_dir = self.queueUpload("bar_1.0-1", queue_entry=cookie) |
58 | + self.uploadprocessor.processBuildUpload(upload_dir, cookie) |
59 | + self.assertLogContains( |
60 | + "Unable to find package build job with id 42. Skipping.") |
61 | |
62 | def testNoFiles(self): |
63 | + # If the upload directory is empty, the upload |
64 | + # will fail. |
65 | + |
66 | # Upload a source package |
67 | upload_dir = self.queueUpload("bar_1.0-1") |
68 | self.processUpload(self.uploadprocessor, upload_dir) |
69 | @@ -1884,19 +1897,28 @@ |
70 | version="1.0-1", name="bar") |
71 | queue_item.setDone() |
72 | |
73 | - build.jobStarted() |
74 | - build.builder = self.factory.makeBuilder() |
75 | + builder = self.factory.makeBuilder() |
76 | + build.buildqueue_record.markAsBuilding(builder) |
77 | + build.builder = build.buildqueue_record.builder |
78 | + |
79 | + build.status = BuildStatus.UPLOADING |
80 | |
81 | # Upload and accept a binary for the primary archive source. |
82 | shutil.rmtree(upload_dir) |
83 | self.layer.txn.commit() |
84 | - leaf_name = "%d-%d" % (build.id, 60) |
85 | + leaf_name = build.getUploadDirLeaf(build.getBuildCookie()) |
86 | os.mkdir(os.path.join(self.incoming_folder, leaf_name)) |
87 | self.options.context = 'buildd' |
88 | self.options.builds = True |
89 | - self.uploadprocessor.processBuildUpload(self.incoming_folder, leaf_name) |
90 | + self.uploadprocessor.processBuildUpload( |
91 | + self.incoming_folder, leaf_name) |
92 | + self.assertEquals(1, len(self.oopses)) |
93 | self.layer.txn.commit() |
94 | - self.assertEquals(BuildStatus.FAILEDTOUPLOAD, build.status) |
95 | + self.assertEquals( |
96 | + BuildStatus.FAILEDTOUPLOAD, build.status) |
97 | + self.assertEquals(builder, build.builder) |
98 | + self.assertIsNot(None, build.date_finished) |
99 | + self.assertIsNot(None, build.duration) |
100 | log_contents = build.upload_log.read() |
101 | self.assertTrue('ERROR: Exception while processing upload ' |
102 | in log_contents) |
103 | @@ -1904,6 +1926,8 @@ |
104 | in log_contents) |
105 | |
106 | def testSuccess(self): |
107 | + # Properly uploaded binaries should result in the |
108 | + # build status changing to FULLYBUILT. |
109 | # Upload a source package |
110 | upload_dir = self.queueUpload("bar_1.0-1") |
111 | self.processUpload(self.uploadprocessor, upload_dir) |
112 | @@ -1916,21 +1940,27 @@ |
113 | version="1.0-1", name="bar") |
114 | queue_item.setDone() |
115 | |
116 | - build.jobStarted() |
117 | - build.builder = self.factory.makeBuilder() |
118 | + build.buildqueue_record.markAsBuilding(self.factory.makeBuilder()) |
119 | + |
120 | + build.status = BuildStatus.UPLOADING |
121 | |
122 | # Upload and accept a binary for the primary archive source. |
123 | shutil.rmtree(upload_dir) |
124 | self.layer.txn.commit() |
125 | - leaf_name = "%d-%d" % (build.id, 60) |
126 | + leaf_name = build.getUploadDirLeaf(build.getBuildCookie()) |
127 | upload_dir = self.queueUpload("bar_1.0-1_binary", |
128 | queue_entry=leaf_name) |
129 | self.options.context = 'buildd' |
130 | self.options.builds = True |
131 | - self.uploadprocessor.processBuildUpload(self.incoming_folder, leaf_name) |
132 | + last_stub_mail_count = len(stub.test_emails) |
133 | + self.uploadprocessor.processBuildUpload( |
134 | + self.incoming_folder, leaf_name) |
135 | self.layer.txn.commit() |
136 | + # No emails are sent on success |
137 | + self.assertEquals(len(stub.test_emails), last_stub_mail_count) |
138 | self.assertEquals(BuildStatus.FULLYBUILT, build.status) |
139 | - log_lines = build.upload_log.read().splitlines() |
140 | + log_contents = build.upload_log.read() |
141 | + log_lines = log_contents.splitlines() |
142 | self.assertTrue( |
143 | 'INFO: Processing upload bar_1.0-1_i386.changes' in log_lines) |
144 | self.assertTrue( |
145 | @@ -1942,10 +1972,9 @@ |
146 | """Tests for parse_build_upload_leaf_name.""" |
147 | |
148 | def test_valid(self): |
149 | - self.assertEquals((42, 60), parse_build_upload_leaf_name("42-60")) |
150 | - |
151 | - def test_invalid_chars(self): |
152 | - self.assertRaises(ValueError, parse_build_upload_leaf_name, "a42-460") |
153 | - |
154 | - def test_no_dash(self): |
155 | - self.assertRaises(ValueError, parse_build_upload_leaf_name, "32") |
156 | + self.assertEquals( |
157 | + 60, parse_build_upload_leaf_name("20100812-42-PACKAGEBUILD-60")) |
158 | + |
159 | + def test_invalid_jobid(self): |
160 | + self.assertRaises( |
161 | + ValueError, parse_build_upload_leaf_name, "aaba-a42-PACKAGEBUILD-abc") |
162 | |
163 | === modified file 'lib/lp/archiveuploader/uploadprocessor.py' |
164 | --- lib/lp/archiveuploader/uploadprocessor.py 2010-08-27 11:19:54 +0000 |
165 | +++ lib/lp/archiveuploader/uploadprocessor.py 2010-09-16 00:48:58 +0000 |
166 | @@ -74,14 +74,16 @@ |
167 | SOURCE_PACKAGE_RECIPE_UPLOAD_POLICY_NAME, |
168 | UploadPolicyError, |
169 | ) |
170 | -from lp.buildmaster.enums import BuildStatus |
171 | +from lp.buildmaster.enums import ( |
172 | + BuildStatus, |
173 | + ) |
174 | +from lp.buildmaster.interfaces.buildfarmjob import IBuildFarmJobSet |
175 | from lp.registry.interfaces.distribution import IDistributionSet |
176 | from lp.registry.interfaces.person import IPersonSet |
177 | from lp.soyuz.interfaces.archive import ( |
178 | IArchiveSet, |
179 | NoSuchPPA, |
180 | ) |
181 | -from lp.soyuz.interfaces.binarypackagebuild import IBinaryPackageBuildSet |
182 | |
183 | |
184 | __all__ = [ |
185 | @@ -104,11 +106,11 @@ |
186 | """Parse the leaf directory name of a build upload. |
187 | |
188 | :param name: Directory name. |
189 | - :return: Tuple with build id and build queue record id. |
190 | + :return: Tuple with build farm job id. |
191 | """ |
192 | - (build_id_str, queue_record_str) = name.split("-", 1) |
193 | + (job_id_str,) = name.split("-")[-1:] |
194 | try: |
195 | - return int(build_id_str), int(queue_record_str) |
196 | + return int(job_id_str) |
197 | except TypeError: |
198 | raise ValueError |
199 | |
200 | @@ -191,7 +193,7 @@ |
201 | continue |
202 | if self.builds: |
203 | # Upload directories contain build results, |
204 | - # directories are named after build ids. |
205 | + # directories are named after job ids. |
206 | self.processBuildUpload(fsroot, upload) |
207 | else: |
208 | self.processUpload(fsroot, upload) |
209 | @@ -206,13 +208,24 @@ |
210 | Build uploads always contain a single package per leaf. |
211 | """ |
212 | try: |
213 | - (build_id, build_queue_record_id) = parse_build_upload_leaf_name( |
214 | - upload) |
215 | + job_id = parse_build_upload_leaf_name(upload) |
216 | except ValueError: |
217 | self.log.warn("Unable to extract build id from leaf name %s," |
218 | " skipping." % upload) |
219 | return |
220 | - build = getUtility(IBinaryPackageBuildSet).getByBuildID(int(build_id)) |
221 | + try: |
222 | + buildfarm_job = getUtility(IBuildFarmJobSet).getByID(job_id) |
223 | + except NotFoundError: |
224 | + self.log.warn( |
225 | + "Unable to find package build job with id %d. Skipping." % |
226 | + job_id) |
227 | + return |
228 | + build = buildfarm_job.getSpecificJob() |
229 | + if build.status != BuildStatus.UPLOADING: |
230 | + self.log.warn( |
231 | + "Expected build status to be 'UPLOADING', was %s. Skipping.", |
232 | + build.status.name) |
233 | + return |
234 | self.log.debug("Build %s found" % build.id) |
235 | logger = BufferLogger() |
236 | upload_path = os.path.join(fsroot, upload) |
237 | @@ -246,6 +259,9 @@ |
238 | build.notify(extra_info="Uploading build %s failed." % upload) |
239 | build.storeUploadLog(logger.buffer.getvalue()) |
240 | |
241 | + # Remove BuildQueue record. |
242 | + build.buildqueue_record.destroySelf() |
243 | + |
244 | def processUpload(self, fsroot, upload): |
245 | """Process an upload's changes files, and move it to a new directory. |
246 | |
247 | |
248 | === modified file 'lib/lp/buildmaster/enums.py' |
249 | --- lib/lp/buildmaster/enums.py 2010-08-27 15:03:18 +0000 |
250 | +++ lib/lp/buildmaster/enums.py 2010-09-16 00:48:58 +0000 |
251 | @@ -97,6 +97,13 @@ |
252 | will be notified via process-upload about the reason of the rejection. |
253 | """) |
254 | |
255 | + UPLOADING = DBItem(8, """ |
256 | + Uploading build |
257 | + |
258 | + The build has completed and is waiting to be processed by the |
259 | + upload processor. |
260 | + """) |
261 | + |
262 | |
263 | class BuildFarmJobType(DBEnumeratedType): |
264 | """Soyuz build farm job type. |
265 | @@ -128,6 +135,3 @@ |
266 | |
267 | Generate translation templates from a bazaar branch. |
268 | """) |
269 | - |
270 | - |
271 | - |
272 | |
273 | === modified file 'lib/lp/buildmaster/interfaces/buildfarmjob.py' |
274 | --- lib/lp/buildmaster/interfaces/buildfarmjob.py 2010-08-30 15:00:23 +0000 |
275 | +++ lib/lp/buildmaster/interfaces/buildfarmjob.py 2010-09-16 00:48:58 +0000 |
276 | @@ -292,3 +292,7 @@ |
277 | that should be included. |
278 | :return: a `ResultSet` representing the requested builds. |
279 | """ |
280 | + |
281 | + def getByID(job_id): |
282 | + """Look up a `IBuildFarmJob` record by id. |
283 | + """ |
284 | |
285 | === modified file 'lib/lp/buildmaster/interfaces/packagebuild.py' |
286 | --- lib/lp/buildmaster/interfaces/packagebuild.py 2010-08-27 11:19:54 +0000 |
287 | +++ lib/lp/buildmaster/interfaces/packagebuild.py 2010-09-16 00:48:58 +0000 |
288 | @@ -91,13 +91,6 @@ |
289 | title=_("Distribution series"), required=True, |
290 | description=_("Shortcut for its distribution series."))) |
291 | |
292 | - def getUploaderCommand(package_build, upload_leaf, uploader_logfilename): |
293 | - """Get the command to run as the uploader. |
294 | - |
295 | - :return: A list of command line arguments, beginning with the |
296 | - executable. |
297 | - """ |
298 | - |
299 | def getUploadDirLeaf(build_cookie, now=None): |
300 | """Return the directory-leaf where files to be uploaded are stored. |
301 | |
302 | @@ -106,24 +99,13 @@ |
303 | directory name. If not provided, defaults to now. |
304 | """ |
305 | |
306 | - def getUploadDir(upload_leaf): |
307 | - """Return the full directory where files to be uploaded are stored. |
308 | - |
309 | - :param upload_leaf: The leaf directory name where things will be |
310 | - stored. |
311 | + def getBuildCookie(): |
312 | + """Return the build cookie (build id and build queue record id). |
313 | """ |
314 | |
315 | def getLogFromSlave(build): |
316 | """Get last buildlog from slave. """ |
317 | |
318 | - def getUploadLogContent(root, leaf): |
319 | - """Retrieve the upload log contents. |
320 | - |
321 | - :param root: Root directory for the uploads |
322 | - :param leaf: Leaf for this particular upload |
323 | - :return: Contents of log file or message saying no log file was found. |
324 | - """ |
325 | - |
326 | def estimateDuration(): |
327 | """Estimate the build duration.""" |
328 | |
329 | |
330 | === modified file 'lib/lp/buildmaster/model/buildfarmjob.py' |
331 | --- lib/lp/buildmaster/model/buildfarmjob.py 2010-08-30 15:00:23 +0000 |
332 | +++ lib/lp/buildmaster/model/buildfarmjob.py 2010-09-16 00:48:58 +0000 |
333 | @@ -52,6 +52,7 @@ |
334 | IStoreSelector, |
335 | MAIN_STORE, |
336 | ) |
337 | +from lp.app.errors import NotFoundError |
338 | from lp.buildmaster.enums import BuildStatus |
339 | from lp.buildmaster.enums import BuildFarmJobType |
340 | from lp.buildmaster.interfaces.buildfarmjob import ( |
341 | @@ -339,6 +340,7 @@ |
342 | """See `IBuild`""" |
343 | return self.status not in [BuildStatus.NEEDSBUILD, |
344 | BuildStatus.BUILDING, |
345 | + BuildStatus.UPLOADING, |
346 | BuildStatus.SUPERSEDED] |
347 | |
348 | def getSpecificJob(self): |
349 | @@ -431,3 +433,11 @@ |
350 | filtered_builds.config(distinct=True) |
351 | |
352 | return filtered_builds |
353 | + |
354 | + def getByID(self, job_id): |
355 | + """See `IBuildfarmJobSet`.""" |
356 | + job = IStore(BuildFarmJob).find(BuildFarmJob, |
357 | + BuildFarmJob.id == job_id).one() |
358 | + if job is None: |
359 | + raise NotFoundError(job_id) |
360 | + return job |
361 | |
362 | === modified file 'lib/lp/buildmaster/model/packagebuild.py' |
363 | --- lib/lp/buildmaster/model/packagebuild.py 2010-09-09 17:02:33 +0000 |
364 | +++ lib/lp/buildmaster/model/packagebuild.py 2010-09-16 00:48:58 +0000 |
365 | @@ -14,7 +14,6 @@ |
366 | import datetime |
367 | import logging |
368 | import os.path |
369 | -import subprocess |
370 | |
371 | from cStringIO import StringIO |
372 | from lazr.delegates import delegates |
373 | @@ -36,11 +35,6 @@ |
374 | |
375 | from canonical.config import config |
376 | from canonical.database.enumcol import DBEnum |
377 | -from canonical.database.sqlbase import ( |
378 | - clear_current_connection_cache, |
379 | - cursor, |
380 | - flush_database_updates, |
381 | - ) |
382 | from canonical.launchpad.browser.librarian import ProxiedLibraryFileAlias |
383 | from canonical.launchpad.helpers import filenameToContentType |
384 | from canonical.launchpad.interfaces.librarian import ILibraryFileAliasSet |
385 | @@ -73,7 +67,6 @@ |
386 | |
387 | |
388 | SLAVE_LOG_FILENAME = 'buildlog' |
389 | -UPLOAD_LOG_FILENAME = 'uploader.log' |
390 | |
391 | |
392 | class PackageBuild(BuildFarmJobDerived, Storm): |
393 | @@ -164,30 +157,11 @@ |
394 | timestamp = now.strftime("%Y%m%d-%H%M%S") |
395 | return '%s-%s' % (timestamp, build_cookie) |
396 | |
397 | - def getUploadDir(self, upload_leaf): |
398 | - """See `IPackageBuild`.""" |
399 | - return os.path.join(config.builddmaster.root, 'incoming', upload_leaf) |
400 | - |
401 | - @staticmethod |
402 | - def getUploaderCommand(package_build, upload_leaf, upload_logfilename): |
403 | - """See `IPackageBuild`.""" |
404 | - root = os.path.abspath(config.builddmaster.root) |
405 | - uploader_command = list(config.builddmaster.uploader.split()) |
406 | - |
407 | - # Add extra arguments for processing a package upload. |
408 | - extra_args = [ |
409 | - "--log-file", "%s" % upload_logfilename, |
410 | - "-d", "%s" % package_build.distribution.name, |
411 | - "-s", "%s" % ( |
412 | - package_build.distro_series.getSuite(package_build.pocket)), |
413 | - "-b", "%s" % package_build.id, |
414 | - "-J", "%s" % upload_leaf, |
415 | - '--context=%s' % package_build.policy_name, |
416 | - "%s" % root, |
417 | - ] |
418 | - |
419 | - uploader_command.extend(extra_args) |
420 | - return uploader_command |
421 | + def getBuildCookie(self): |
422 | + """See `IPackageBuild`.""" |
423 | + return '%s-%s-%s' % ( |
424 | + self.id, self.build_farm_job.job_type.name, |
425 | + self.build_farm_job.id) |
426 | |
427 | @staticmethod |
428 | def getLogFromSlave(package_build): |
429 | @@ -198,26 +172,6 @@ |
430 | package_build.buildqueue_record.getLogFileName(), |
431 | package_build.is_private) |
432 | |
433 | - @staticmethod |
434 | - def getUploadLogContent(root, leaf): |
435 | - """Retrieve the upload log contents. |
436 | - |
437 | - :param root: Root directory for the uploads |
438 | - :param leaf: Leaf for this particular upload |
439 | - :return: Contents of log file or message saying no log file was found. |
440 | - """ |
441 | - # Retrieve log file content. |
442 | - possible_locations = ( |
443 | - 'failed', 'failed-to-move', 'rejected', 'accepted') |
444 | - for location_dir in possible_locations: |
445 | - log_filepath = os.path.join(root, location_dir, leaf, |
446 | - UPLOAD_LOG_FILENAME) |
447 | - if os.path.exists(log_filepath): |
448 | - with open(log_filepath, 'r') as uploader_log_file: |
449 | - return uploader_log_file.read() |
450 | - else: |
451 | - return 'Could not find upload log file' |
452 | - |
453 | def estimateDuration(self): |
454 | """See `IPackageBuild`.""" |
455 | raise NotImplementedError |
456 | @@ -346,19 +300,16 @@ |
457 | root = os.path.abspath(config.builddmaster.root) |
458 | |
459 | # Create a single directory to store build result files. |
460 | - upload_leaf = self.getUploadDirLeaf( |
461 | - '%s-%s' % (self.id, self.buildqueue_record.id)) |
462 | - upload_dir = self.getUploadDir(upload_leaf) |
463 | - logger.debug("Storing build result at '%s'" % upload_dir) |
464 | + upload_leaf = self.getUploadDirLeaf(self.getBuildCookie()) |
465 | + grab_dir = os.path.join(root, "grabbing", upload_leaf) |
466 | + logger.debug("Storing build result at '%s'" % grab_dir) |
467 | |
468 | # Build the right UPLOAD_PATH so the distribution and archive |
469 | # can be correctly found during the upload: |
470 | # <archive_id>/distribution_name |
471 | # for all destination archive types. |
472 | - archive = self.archive |
473 | - distribution_name = self.distribution.name |
474 | - target_path = '%s/%s' % (archive.id, distribution_name) |
475 | - upload_path = os.path.join(upload_dir, target_path) |
476 | + upload_path = os.path.join( |
477 | + grab_dir, str(self.archive.id), self.distribution.name) |
478 | os.makedirs(upload_path) |
479 | |
480 | slave = removeSecurityProxy(self.buildqueue_record.builder.slave) |
481 | @@ -379,106 +330,35 @@ |
482 | slave_file = slave.getFile(filemap[filename]) |
483 | copy_and_close(slave_file, out_file) |
484 | |
485 | + # Store build information, build record was already updated during |
486 | + # the binary upload. |
487 | + self.storeBuildInfo(self, librarian, slave_status) |
488 | + |
489 | # We only attempt the upload if we successfully copied all the |
490 | # files from the slave. |
491 | if successful_copy_from_slave: |
492 | - uploader_logfilename = os.path.join( |
493 | - upload_dir, UPLOAD_LOG_FILENAME) |
494 | - uploader_command = self.getUploaderCommand( |
495 | - self, upload_leaf, uploader_logfilename) |
496 | - logger.debug("Saving uploader log at '%s'" % uploader_logfilename) |
497 | - |
498 | - logger.info("Invoking uploader on %s" % root) |
499 | - logger.info("%s" % uploader_command) |
500 | - |
501 | - uploader_process = subprocess.Popen( |
502 | - uploader_command, stdout=subprocess.PIPE, |
503 | - stderr=subprocess.PIPE) |
504 | - |
505 | - # Nothing should be written to the stdout/stderr. |
506 | - upload_stdout, upload_stderr = uploader_process.communicate() |
507 | - |
508 | - # XXX cprov 2007-04-17: we do not check uploader_result_code |
509 | - # anywhere. We need to find out what will be best strategy |
510 | - # when it failed HARD (there is a huge effort in process-upload |
511 | - # to not return error, it only happen when the code is broken). |
512 | - uploader_result_code = uploader_process.returncode |
513 | - logger.info("Uploader returned %d" % uploader_result_code) |
514 | - |
515 | - # Quick and dirty hack to carry on on process-upload failures |
516 | - if os.path.exists(upload_dir): |
517 | - logger.warning("The upload directory did not get moved.") |
518 | - failed_dir = os.path.join(root, "failed-to-move") |
519 | - if not os.path.exists(failed_dir): |
520 | - os.mkdir(failed_dir) |
521 | - os.rename(upload_dir, os.path.join(failed_dir, upload_leaf)) |
522 | - |
523 | - # The famous 'flush_updates + clear_cache' will make visible |
524 | - # the DB changes done in process-upload, considering that the |
525 | - # transaction was set with ISOLATION_LEVEL_READ_COMMITED |
526 | - # isolation level. |
527 | - cur = cursor() |
528 | - cur.execute('SHOW transaction_isolation') |
529 | - isolation_str = cur.fetchone()[0] |
530 | - assert isolation_str == 'read committed', ( |
531 | - 'BuildMaster/BuilderGroup transaction isolation should be ' |
532 | - 'ISOLATION_LEVEL_READ_COMMITTED (not "%s")' % isolation_str) |
533 | - |
534 | - original_slave = self.buildqueue_record.builder.slave |
535 | - |
536 | - # XXX Robert Collins, Celso Providelo 2007-05-26 bug=506256: |
537 | - # 'Refreshing' objects procedure is forced on us by using a |
538 | - # different process to do the upload, but as that process runs |
539 | - # in the same unix account, it is simply double handling and we |
540 | - # would be better off to do it within this process. |
541 | - flush_database_updates() |
542 | - clear_current_connection_cache() |
543 | - |
544 | - # XXX cprov 2007-06-15: Re-issuing removeSecurityProxy is forced on |
545 | - # us by sqlobject refreshing the builder object during the |
546 | - # transaction cache clearing. Once we sort the previous problem |
547 | - # this step should probably not be required anymore. |
548 | - self.buildqueue_record.builder.setSlaveForTesting( |
549 | - removeSecurityProxy(original_slave)) |
550 | - |
551 | - # Store build information, build record was already updated during |
552 | - # the binary upload. |
553 | - self.storeBuildInfo(self, librarian, slave_status) |
554 | - |
555 | - # Retrive the up-to-date build record and perform consistency |
556 | - # checks. The build record should be updated during the binary |
557 | - # upload processing, if it wasn't something is broken and needs |
558 | - # admins attention. Even when we have a FULLYBUILT build record, |
559 | - # if it is not related with at least one binary, there is also |
560 | - # a problem. |
561 | - # For both situations we will mark the builder as FAILEDTOUPLOAD |
562 | - # and the and update the build details (datebuilt, duration, |
563 | - # buildlog, builder) in LP. A build-failure-notification will be |
564 | - # sent to the lp-build-admin celebrity and to the sourcepackagerelease |
565 | - # uploader about this occurrence. The failure notification will |
566 | - # also contain the information required to manually reprocess the |
567 | - # binary upload when it was the case. |
568 | - if (self.status != BuildStatus.FULLYBUILT or |
569 | - not successful_copy_from_slave or |
570 | - not self.verifySuccessfulUpload()): |
571 | - logger.warning("Build %s upload failed." % self.id) |
572 | + logger.info( |
573 | + "Gathered %s %d completely. Moving %s to uploader queue." % ( |
574 | + self.__class__.__name__, self.id, upload_leaf)) |
575 | + target_dir = os.path.join(root, "incoming") |
576 | + self.status = BuildStatus.UPLOADING |
577 | + else: |
578 | + logger.warning( |
579 | + "Copy from slave for build %s was unsuccessful.", self.id) |
580 | self.status = BuildStatus.FAILEDTOUPLOAD |
581 | - uploader_log_content = self.getUploadLogContent(root, |
582 | - upload_leaf) |
583 | - # Store the upload_log_contents in librarian so it can be |
584 | - # accessed by anyone with permission to see the build. |
585 | - self.storeUploadLog(uploader_log_content) |
586 | - # Notify the build failure. |
587 | - self.notify(extra_info=uploader_log_content) |
588 | - else: |
589 | - logger.info( |
590 | - "Gathered %s %d completely" % ( |
591 | - self.__class__.__name__, self.id)) |
592 | + self.notify(extra_info='Copy from slave was unsuccessful.') |
593 | + target_dir = os.path.join(root, "failed") |
594 | + |
595 | + if not os.path.exists(target_dir): |
596 | + os.mkdir(target_dir) |
597 | + |
598 | + # Move the directory used to grab the binaries into |
599 | + # the incoming directory so the upload processor never |
600 | + # sees half-finished uploads. |
601 | + os.rename(grab_dir, os.path.join(target_dir, upload_leaf)) |
602 | |
603 | # Release the builder for another job. |
604 | self.buildqueue_record.builder.cleanSlave() |
605 | - # Remove BuildQueue record. |
606 | - self.buildqueue_record.destroySelf() |
607 | |
608 | def _handleStatus_PACKAGEFAIL(self, librarian, slave_status, logger): |
609 | """Handle a package that had failed to build. |
610 | @@ -583,7 +463,9 @@ |
611 | unfinished_states = [ |
612 | BuildStatus.NEEDSBUILD, |
613 | BuildStatus.BUILDING, |
614 | - BuildStatus.SUPERSEDED] |
615 | + BuildStatus.UPLOADING, |
616 | + BuildStatus.SUPERSEDED, |
617 | + ] |
618 | if status is None or status in unfinished_states: |
619 | result_set.order_by( |
620 | Desc(BuildFarmJob.date_created), BuildFarmJob.id) |
621 | |
622 | === modified file 'lib/lp/buildmaster/tests/test_buildfarmjob.py' |
623 | --- lib/lp/buildmaster/tests/test_buildfarmjob.py 2010-08-30 15:00:23 +0000 |
624 | +++ lib/lp/buildmaster/tests/test_buildfarmjob.py 2010-09-16 00:48:58 +0000 |
625 | @@ -22,6 +22,7 @@ |
626 | DatabaseFunctionalLayer, |
627 | LaunchpadFunctionalLayer, |
628 | ) |
629 | +from lp.app.errors import NotFoundError |
630 | from lp.buildmaster.enums import ( |
631 | BuildFarmJobType, |
632 | BuildStatus, |
633 | @@ -317,3 +318,17 @@ |
634 | result = self.build_farm_job_set.getBuildsForBuilder(self.builder) |
635 | |
636 | self.assertEqual([build_1, build_2], list(result)) |
637 | + |
638 | + def test_getByID(self): |
639 | + # getByID returns a job by id. |
640 | + build_1 = self.makeBuildFarmJob( |
641 | + builder=self.builder, |
642 | + date_finished=datetime(2008, 10, 10, tzinfo=pytz.UTC)) |
643 | + flush_database_updates() |
644 | + self.assertEquals( |
645 | + build_1, self.build_farm_job_set.getByID(build_1.id)) |
646 | + |
647 | + def test_getByID_nonexistant(self): |
648 | + # getByID raises NotFoundError for unknown job ids. |
649 | + self.assertRaises(NotFoundError, |
650 | + self.build_farm_job_set.getByID, 423432432432) |
651 | |
652 | === modified file 'lib/lp/buildmaster/tests/test_packagebuild.py' |
653 | --- lib/lp/buildmaster/tests/test_packagebuild.py 2010-09-09 17:02:33 +0000 |
654 | +++ lib/lp/buildmaster/tests/test_packagebuild.py 2010-09-16 00:48:58 +0000 |
655 | @@ -9,7 +9,7 @@ |
656 | |
657 | from datetime import datetime |
658 | import hashlib |
659 | -import os.path |
660 | +import os |
661 | |
662 | from storm.store import Store |
663 | from zope.component import getUtility |
664 | @@ -22,6 +22,9 @@ |
665 | LaunchpadFunctionalLayer, |
666 | LaunchpadZopelessLayer, |
667 | ) |
668 | +from lp.archiveuploader.uploadprocessor import ( |
669 | + parse_build_upload_leaf_name, |
670 | + ) |
671 | from lp.buildmaster.enums import ( |
672 | BuildFarmJobType, |
673 | BuildStatus, |
674 | @@ -34,7 +37,6 @@ |
675 | from lp.buildmaster.model.packagebuild import PackageBuild |
676 | from lp.registry.interfaces.pocket import ( |
677 | PackagePublishingPocket, |
678 | - pocketsuffix, |
679 | ) |
680 | from lp.soyuz.tests.soyuzbuilddhelpers import WaitingSlave |
681 | from lp.testing import ( |
682 | @@ -192,15 +194,14 @@ |
683 | '%s-%s' % (now.strftime("%Y%m%d-%H%M%S"), build_cookie), |
684 | upload_leaf) |
685 | |
686 | - def test_getUploadDir(self): |
687 | - # getUploadDir is the absolute path to the directory in which things |
688 | - # are uploaded to. |
689 | - build_cookie = self.factory.getUniqueInteger() |
690 | - upload_leaf = self.package_build.getUploadDirLeaf(build_cookie) |
691 | - upload_dir = self.package_build.getUploadDir(upload_leaf) |
692 | - self.assertEqual( |
693 | - os.path.join(config.builddmaster.root, 'incoming', upload_leaf), |
694 | - upload_dir) |
695 | + def test_getBuildCookie(self): |
696 | + # A build cookie is made up of the package build id and record id. |
697 | + # The uploadprocessor relies on this format. |
698 | + Store.of(self.package_build).flush() |
699 | + cookie = self.package_build.getBuildCookie() |
700 | + expected_cookie = "%d-PACKAGEBUILD-%d" % ( |
701 | + self.package_build.id, self.package_build.build_farm_job.id) |
702 | + self.assertEquals(expected_cookie, cookie) |
703 | |
704 | |
705 | class TestPackageBuildSet(TestPackageBuildBase): |
706 | @@ -257,57 +258,18 @@ |
707 | super(TestGetUploadMethodsMixin, self).setUp() |
708 | self.build = self.makeBuild() |
709 | |
710 | - def test_getUploadLogContent_nolog(self): |
711 | - """If there is no log file there, a string explanation is returned. |
712 | - """ |
713 | - self.useTempDir() |
714 | - self.assertEquals( |
715 | - 'Could not find upload log file', |
716 | - self.build.getUploadLogContent(os.getcwd(), "myleaf")) |
717 | - |
718 | - def test_getUploadLogContent_only_dir(self): |
719 | - """If there is a directory but no log file, expect the error string, |
720 | - not an exception.""" |
721 | - self.useTempDir() |
722 | - os.makedirs("accepted/myleaf") |
723 | - self.assertEquals( |
724 | - 'Could not find upload log file', |
725 | - self.build.getUploadLogContent(os.getcwd(), "myleaf")) |
726 | - |
727 | - def test_getUploadLogContent_readsfile(self): |
728 | - """If there is a log file, return its contents.""" |
729 | - self.useTempDir() |
730 | - os.makedirs("accepted/myleaf") |
731 | - with open('accepted/myleaf/uploader.log', 'w') as f: |
732 | - f.write('foo') |
733 | - self.assertEquals( |
734 | - 'foo', self.build.getUploadLogContent(os.getcwd(), "myleaf")) |
735 | - |
736 | - def test_getUploaderCommand(self): |
737 | - upload_leaf = self.factory.getUniqueString('upload-leaf') |
738 | - config_args = list(config.builddmaster.uploader.split()) |
739 | - log_file = self.factory.getUniqueString('logfile') |
740 | - config_args.extend( |
741 | - ['--log-file', log_file, |
742 | - '-d', self.build.distribution.name, |
743 | - '-s', (self.build.distro_series.name |
744 | - + pocketsuffix[self.build.pocket]), |
745 | - '-b', str(self.build.id), |
746 | - '-J', upload_leaf, |
747 | - '--context=%s' % self.build.policy_name, |
748 | - os.path.abspath(config.builddmaster.root), |
749 | - ]) |
750 | - uploader_command = self.build.getUploaderCommand( |
751 | - self.build, upload_leaf, log_file) |
752 | - self.assertEqual(config_args, uploader_command) |
753 | + def test_getUploadDirLeafCookie_parseable(self): |
754 | + # getUploadDirLeaf should return a directory name |
755 | + # that is parseable by the upload processor. |
756 | + upload_leaf = self.build.getUploadDirLeaf( |
757 | + self.build.getBuildCookie()) |
758 | + job_id = parse_build_upload_leaf_name(upload_leaf) |
759 | + self.assertEqual(job_id, self.build.build_farm_job.id) |
760 | |
761 | |
762 | class TestHandleStatusMixin: |
763 | """Tests for `IPackageBuild`s handleStatus method. |
764 | |
765 | - Note: these tests do *not* test the updating of the build |
766 | - status to FULLYBUILT as this happens during the upload which |
767 | - is stubbed out by a mock function. |
768 | """ |
769 | |
770 | layer = LaunchpadZopelessLayer |
771 | @@ -329,23 +291,23 @@ |
772 | builder.setSlaveForTesting(self.slave) |
773 | |
774 | # We overwrite the buildmaster root to use a temp directory. |
775 | - tmp_dir = self.makeTemporaryDirectory() |
776 | + self.upload_root = self.makeTemporaryDirectory() |
777 | tmp_builddmaster_root = """ |
778 | [builddmaster] |
779 | root: %s |
780 | - """ % tmp_dir |
781 | + """ % self.upload_root |
782 | config.push('tmp_builddmaster_root', tmp_builddmaster_root) |
783 | |
784 | # We stub out our builds getUploaderCommand() method so |
785 | # we can check whether it was called as well as |
786 | # verifySuccessfulUpload(). |
787 | - self.fake_getUploaderCommand = FakeMethod( |
788 | - result=['echo', 'noop']) |
789 | - removeSecurityProxy(self.build).getUploaderCommand = ( |
790 | - self.fake_getUploaderCommand) |
791 | removeSecurityProxy(self.build).verifySuccessfulUpload = FakeMethod( |
792 | result=True) |
793 | |
794 | + def assertResultCount(self, count, result): |
795 | + self.assertEquals( |
796 | + 1, len(os.listdir(os.path.join(self.upload_root, result)))) |
797 | + |
798 | def test_handleStatus_OK_normal_file(self): |
799 | # A filemap with plain filenames should not cause a problem. |
800 | # The call to handleStatus will attempt to get the file from |
801 | @@ -354,8 +316,8 @@ |
802 | 'filemap': {'myfile.py': 'test_file_hash'}, |
803 | }) |
804 | |
805 | - self.assertEqual(BuildStatus.FULLYBUILT, self.build.status) |
806 | - self.assertEqual(1, self.fake_getUploaderCommand.call_count) |
807 | + self.assertEqual(BuildStatus.UPLOADING, self.build.status) |
808 | + self.assertResultCount(1, "incoming") |
809 | |
810 | def test_handleStatus_OK_absolute_filepath(self): |
811 | # A filemap that tries to write to files outside of |
812 | @@ -364,7 +326,7 @@ |
813 | 'filemap': {'/tmp/myfile.py': 'test_file_hash'}, |
814 | }) |
815 | self.assertEqual(BuildStatus.FAILEDTOUPLOAD, self.build.status) |
816 | - self.assertEqual(0, self.fake_getUploaderCommand.call_count) |
817 | + self.assertResultCount(0, "failed") |
818 | |
819 | def test_handleStatus_OK_relative_filepath(self): |
820 | # A filemap that tries to write to files outside of |
821 | @@ -373,7 +335,7 @@ |
822 | 'filemap': {'../myfile.py': 'test_file_hash'}, |
823 | }) |
824 | self.assertEqual(BuildStatus.FAILEDTOUPLOAD, self.build.status) |
825 | - self.assertEqual(0, self.fake_getUploaderCommand.call_count) |
826 | + self.assertResultCount(0, "failed") |
827 | |
828 | def test_handleStatus_OK_sets_build_log(self): |
829 | # The build log is set during handleStatus. |
830 | |
831 | === modified file 'lib/lp/code/browser/sourcepackagerecipebuild.py' |
832 | --- lib/lp/code/browser/sourcepackagerecipebuild.py 2010-08-27 11:19:54 +0000 |
833 | +++ lib/lp/code/browser/sourcepackagerecipebuild.py 2010-09-16 00:48:58 +0000 |
834 | @@ -82,6 +82,7 @@ |
835 | return 'No suitable builders' |
836 | return { |
837 | BuildStatus.NEEDSBUILD: 'Pending build', |
838 | + BuildStatus.UPLOADING: 'Build uploading', |
839 | BuildStatus.FULLYBUILT: 'Successful build', |
840 | BuildStatus.MANUALDEPWAIT: ( |
841 | 'Could not build because of missing dependencies'), |
842 | |
843 | === modified file 'lib/lp/code/model/tests/test_sourcepackagerecipebuild.py' |
844 | --- lib/lp/code/model/tests/test_sourcepackagerecipebuild.py 2010-09-09 17:02:33 +0000 |
845 | +++ lib/lp/code/model/tests/test_sourcepackagerecipebuild.py 2010-09-16 00:48:58 +0000 |
846 | @@ -354,15 +354,15 @@ |
847 | queue_record.builder.setSlaveForTesting(slave) |
848 | return build |
849 | |
850 | - def assertNotifyOnce(status, build): |
851 | + def assertNotifyCount(status, build, count): |
852 | build.handleStatus(status, None, {'filemap': {}}) |
853 | - self.assertEqual(1, len(pop_notifications())) |
854 | - for status in ['PACKAGEFAIL', 'OK']: |
855 | - assertNotifyOnce(status, prepare_build()) |
856 | + self.assertEqual(count, len(pop_notifications())) |
857 | + assertNotifyCount("PACKAGEFAIL", prepare_build(), 1) |
858 | + assertNotifyCount("OK", prepare_build(), 0) |
859 | build = prepare_build() |
860 | removeSecurityProxy(build).verifySuccessfulUpload = FakeMethod( |
861 | - result=True) |
862 | - assertNotifyOnce('OK', prepare_build()) |
863 | + result=True) |
864 | + assertNotifyCount("OK", prepare_build(), 0) |
865 | |
866 | |
867 | class MakeSPRecipeBuildMixin: |
868 | |
869 | === modified file 'lib/lp/registry/model/sourcepackage.py' |
870 | --- lib/lp/registry/model/sourcepackage.py 2010-08-27 11:19:54 +0000 |
871 | +++ lib/lp/registry/model/sourcepackage.py 2010-09-16 00:48:58 +0000 |
872 | @@ -597,11 +597,15 @@ |
873 | % sqlvalues(BuildStatus.FULLYBUILT)) |
874 | |
875 | # Ordering according status |
876 | - # * NEEDSBUILD & BUILDING by -lastscore |
877 | + # * NEEDSBUILD, BUILDING & UPLOADING by -lastscore |
878 | # * SUPERSEDED by -datecreated |
879 | # * FULLYBUILT & FAILURES by -datebuilt |
880 | # It should present the builds in a more natural order. |
881 | - if build_state in [BuildStatus.NEEDSBUILD, BuildStatus.BUILDING]: |
882 | + if build_state in [ |
883 | + BuildStatus.NEEDSBUILD, |
884 | + BuildStatus.BUILDING, |
885 | + BuildStatus.UPLOADING, |
886 | + ]: |
887 | orderBy = ["-BuildQueue.lastscore"] |
888 | clauseTables.append('BuildPackageJob') |
889 | condition_clauses.append( |
890 | |
891 | === modified file 'lib/lp/soyuz/browser/archive.py' |
892 | --- lib/lp/soyuz/browser/archive.py 2010-08-31 11:31:04 +0000 |
893 | +++ lib/lp/soyuz/browser/archive.py 2010-09-16 00:48:58 +0000 |
894 | @@ -947,6 +947,7 @@ |
895 | 'NEEDSBUILD': 'Waiting to build', |
896 | 'FAILEDTOBUILD': 'Failed to build:', |
897 | 'BUILDING': 'Currently building', |
898 | + 'UPLOADING': 'Currently uploading', |
899 | } |
900 | |
901 | now = datetime.now(tz=pytz.UTC) |
902 | |
903 | === modified file 'lib/lp/soyuz/doc/buildd-slavescanner.txt' |
904 | --- lib/lp/soyuz/doc/buildd-slavescanner.txt 2010-08-30 02:07:38 +0000 |
905 | +++ lib/lp/soyuz/doc/buildd-slavescanner.txt 2010-09-16 00:48:58 +0000 |
906 | @@ -319,87 +319,27 @@ |
907 | This situation happens when the builder has finished the job and is |
908 | waiting for the master to collect its results. |
909 | |
910 | -The build record in question can end up in the following states: |
911 | - |
912 | - * FULLYBUILT: when binaries were collected and uploaded correctly; |
913 | - * FAILEDTOUPLOAD: binaries were collected but the upload was |
914 | - rejected/failed. |
915 | - |
916 | - |
917 | -=== Failed to Upload (FAILEDTOUPLOAD) === |
918 | +The build record in question will end up as UPLOADING. |
919 | + |
920 | +=== Uploading (UPLOADING) === |
921 | |
922 | >>> bqItem10 = a_build.queueBuild() |
923 | >>> setupBuildQueue(bqItem10, a_builder) |
924 | - >>> last_stub_mail_count = len(stub.test_emails) |
925 | |
926 | Create a mock slave so the builder gets the right responses for this test. |
927 | |
928 | >>> bqItem10.builder.setSlaveForTesting( |
929 | ... WaitingSlave('BuildStatus.OK')) |
930 | |
931 | -If the build record wasn't updated before/during the updateBuild |
932 | -(precisely on binary upload time), the build will be considered |
933 | -FAILEDTOUPLOAD: |
934 | +The build will progress to the UPLOADING state if the status from |
935 | +the builder was OK: |
936 | |
937 | >>> build = getUtility(IBinaryPackageBuildSet).getByQueueEntry(bqItem10) |
938 | >>> a_builder.updateBuild(bqItem10) |
939 | - WARNING:slave-scanner:Build ... upload failed. |
940 | - >>> build.builder is not None |
941 | - True |
942 | - >>> build.date_finished is not None |
943 | - True |
944 | - >>> build.duration is not None |
945 | - True |
946 | - >>> build.log is not None |
947 | - True |
948 | - >>> check_mail_sent(last_stub_mail_count) |
949 | - True |
950 | >>> build.status.title |
951 | - 'Failed to upload' |
952 | - |
953 | -Let's check the emails generated by this 'failure' |
954 | -(see build-failedtoupload-workflow.txt for more information): |
955 | - |
956 | - >>> from operator import itemgetter |
957 | - >>> local_test_emails = stub.test_emails[last_stub_mail_count:] |
958 | - >>> local_test_emails.sort(key=itemgetter(1), reverse=True) |
959 | - >>> for from_addr, to_addrs, raw_msg in local_test_emails: |
960 | - ... print to_addrs |
961 | - ['mark@example.com'] |
962 | - ['foo.bar@canonical.com'] |
963 | - ['celso.providelo@canonical.com'] |
964 | - |
965 | -Note that a real failed-to-upload notification contains the respective |
966 | -upload log information: |
967 | - |
968 | - >>> one_email = stub.test_emails.pop() |
969 | - >>> from_addr, to_addrs, raw_msg = one_email |
970 | - >>> print raw_msg |
971 | - Content-Type: text/plain; charset="utf-8" |
972 | - ... |
973 | - X-Launchpad-Build-State: FAILEDTOUPLOAD |
974 | - ... |
975 | - * Build Log: http://.../...i386.mozilla-firefox_0.9_BUILDING.txt.gz |
976 | - ... |
977 | - Upload log: |
978 | - DEBUG ... |
979 | - DEBUG Initialising connection. |
980 | - ... |
981 | - DEBUG Removing lock file: /var/lock/process-upload-buildd.lock |
982 | - ... |
983 | - |
984 | -When a failure in processing the generated binaries occurs, the log |
985 | -output is both emailed in an immediate notification, and stored in the |
986 | -librarian for future reference. |
987 | - |
988 | - >>> build.upload_log is not None |
989 | - True |
990 | - |
991 | -What we can clearly notice is that the log is still containing |
992 | -the old build state (BUILDING) in its name. This is a minor problem |
993 | -that can be sorted by modifying the execution order of procedures |
994 | -inside Buildergroup.buildStatus_OK method. |
995 | - |
996 | + 'Uploading build' |
997 | + |
998 | + >>> bqItem10.destroySelf() |
999 | |
1000 | === Successfully collected and uploaded (FULLYBUILT) === |
1001 | |
1002 | @@ -426,36 +366,14 @@ |
1003 | |
1004 | >>> bqItem10.builder.setSlaveForTesting(WaitingSlave('BuildStatus.OK')) |
1005 | |
1006 | -Now in order to emulate a successfully binary upload we will update |
1007 | -the build record to FULLYBUILT, as the process-upload would do: |
1008 | - |
1009 | - >>> from lp.buildmaster.enums import BuildStatus |
1010 | - >>> build.status = BuildStatus.FULLYBUILT |
1011 | - |
1012 | -Now the updateBuild should recognize this build record as a |
1013 | -Successfully built and uploaded procedure, not sending any |
1014 | -notification and updating the build information: |
1015 | - |
1016 | - >>> a_builder.updateBuild(bqItem10) |
1017 | - >>> build.builder is not None |
1018 | - True |
1019 | - >>> build.date_finished is not None |
1020 | - True |
1021 | - >>> build.duration is not None |
1022 | - True |
1023 | - >>> build.log is not None |
1024 | - True |
1025 | - >>> build.status.title |
1026 | - 'Successfully built' |
1027 | - >>> check_mail_sent(last_stub_mail_count) |
1028 | - False |
1029 | - |
1030 | We do not store any build log information when the binary upload |
1031 | processing succeeded. |
1032 | |
1033 | >>> build.upload_log is None |
1034 | True |
1035 | |
1036 | + >>> bqItem10.destroySelf() |
1037 | + |
1038 | WAITING -> GIVENBACK - slave requested build record to be rescheduled. |
1039 | |
1040 | >>> bqItem11 = a_build.queueBuild() |
1041 | @@ -523,6 +441,7 @@ |
1042 | ... 6).queueBuild() |
1043 | >>> setupBuildQueue(bqItem10, a_builder) |
1044 | >>> build = bqItem10.specific_job.build |
1045 | + >>> from lp.buildmaster.enums import BuildStatus |
1046 | >>> build.status = BuildStatus.FULLYBUILT |
1047 | >>> bqItem10.builder.setSlaveForTesting(WaitingSlave('BuildStatus.OK')) |
1048 | |
1049 | @@ -613,29 +532,6 @@ |
1050 | >>> print headers['content-type'] |
1051 | text/plain |
1052 | |
1053 | -Check the log from the uploader run has made it into the upload directory: |
1054 | - |
1055 | - >>> failed_dir = os.path.join(config.builddmaster.root, 'failed') |
1056 | - >>> failed_uploads = sorted(os.listdir(failed_dir)) |
1057 | - >>> len(failed_uploads) |
1058 | - 2 |
1059 | - |
1060 | - >>> failed_upload = failed_uploads[0] |
1061 | - >>> uploader_log = open(os.path.join(failed_dir, failed_upload, |
1062 | - ... 'uploader.log')) |
1063 | - |
1064 | - >>> print uploader_log.read() |
1065 | - DEBUG ... |
1066 | - DEBUG Initialising connection. |
1067 | - DEBUG Beginning processing |
1068 | - DEBUG Creating directory /var/tmp/builddmaster/accepted |
1069 | - DEBUG Creating directory /var/tmp/builddmaster/rejected |
1070 | - DEBUG Creating directory /var/tmp/builddmaster/failed |
1071 | - ... |
1072 | - DEBUG Rolling back any remaining transactions. |
1073 | - DEBUG Removing lock file: /var/lock/process-upload-buildd.lock |
1074 | - <BLANKLINE> |
1075 | - |
1076 | Remove build upload results root |
1077 | |
1078 | >>> shutil.rmtree(config.builddmaster.root) |
1079 | @@ -1156,7 +1052,6 @@ |
1080 | >>> build.upload_log = None |
1081 | >>> candidate.builder.setSlaveForTesting(WaitingSlave('BuildStatus.OK')) |
1082 | >>> a_builder.updateBuild(candidate) |
1083 | - WARNING:slave-scanner:Build ... upload failed. |
1084 | >>> local_transaction.commit() |
1085 | |
1086 | >>> build.archive.private |
1087 | @@ -1167,6 +1062,7 @@ |
1088 | True |
1089 | >>> print lfa.filename |
1090 | buildlog_ubuntu-hoary-i386.mozilla-firefox_0.9_BUILDING.txt.gz |
1091 | + >>> candidate.destroySelf() |
1092 | |
1093 | The attempt to fetch the buildlog from the common librarian will fail |
1094 | since this is a build in a private archive and the buildlog was thus |
1095 | |
1096 | === modified file 'lib/lp/soyuz/model/archive.py' |
1097 | --- lib/lp/soyuz/model/archive.py 2010-08-31 11:31:04 +0000 |
1098 | +++ lib/lp/soyuz/model/archive.py 2010-09-16 00:48:58 +0000 |
1099 | @@ -1004,10 +1004,9 @@ |
1100 | BuildStatus.FAILEDTOUPLOAD, |
1101 | BuildStatus.MANUALDEPWAIT, |
1102 | ), |
1103 | - # The 'pending' count is a list because we may append to it |
1104 | - # later. |
1105 | 'pending': [ |
1106 | BuildStatus.BUILDING, |
1107 | + BuildStatus.UPLOADING, |
1108 | ], |
1109 | 'succeeded': ( |
1110 | BuildStatus.FULLYBUILT, |
1111 | @@ -1023,6 +1022,7 @@ |
1112 | BuildStatus.FAILEDTOUPLOAD, |
1113 | BuildStatus.MANUALDEPWAIT, |
1114 | BuildStatus.BUILDING, |
1115 | + BuildStatus.UPLOADING, |
1116 | BuildStatus.FULLYBUILT, |
1117 | BuildStatus.SUPERSEDED, |
1118 | ], |
1119 | @@ -2007,6 +2007,7 @@ |
1120 | ), |
1121 | 'pending': ( |
1122 | BuildStatus.BUILDING, |
1123 | + BuildStatus.UPLOADING, |
1124 | BuildStatus.NEEDSBUILD, |
1125 | ), |
1126 | 'succeeded': ( |
1127 | |
1128 | === modified file 'lib/lp/soyuz/model/binarypackagebuild.py' |
1129 | --- lib/lp/soyuz/model/binarypackagebuild.py 2010-09-09 17:02:33 +0000 |
1130 | +++ lib/lp/soyuz/model/binarypackagebuild.py 2010-09-16 00:48:58 +0000 |
1131 | @@ -251,6 +251,7 @@ |
1132 | """See `IBuild`""" |
1133 | return self.status not in [BuildStatus.NEEDSBUILD, |
1134 | BuildStatus.BUILDING, |
1135 | + BuildStatus.UPLOADING, |
1136 | BuildStatus.SUPERSEDED] |
1137 | |
1138 | @property |
1139 | @@ -671,6 +672,10 @@ |
1140 | buildduration = 'not available' |
1141 | buildlog_url = 'not available' |
1142 | builder_url = 'not available' |
1143 | + elif self.status == BuildStatus.UPLOADING: |
1144 | + buildduration = 'uploading' |
1145 | + buildlog_url = 'see builder page' |
1146 | + builder_url = 'not available' |
1147 | elif self.status == BuildStatus.BUILDING: |
1148 | # build in process |
1149 | buildduration = 'not finished' |
1150 | @@ -959,11 +964,14 @@ |
1151 | % sqlvalues(BuildStatus.FULLYBUILT)) |
1152 | |
1153 | # Ordering according status |
1154 | - # * NEEDSBUILD & BUILDING by -lastscore |
1155 | + # * NEEDSBUILD, BUILDING & UPLOADING by -lastscore |
1156 | # * SUPERSEDED & All by -datecreated |
1157 | # * FULLYBUILT & FAILURES by -datebuilt |
1158 | # It should present the builds in a more natural order. |
1159 | - if status in [BuildStatus.NEEDSBUILD, BuildStatus.BUILDING]: |
1160 | + if status in [ |
1161 | + BuildStatus.NEEDSBUILD, |
1162 | + BuildStatus.BUILDING, |
1163 | + BuildStatus.UPLOADING]: |
1164 | orderBy = ["-BuildQueue.lastscore", "BinaryPackageBuild.id"] |
1165 | clauseTables.append('BuildQueue') |
1166 | clauseTables.append('BuildPackageJob') |
1167 | @@ -1079,7 +1087,8 @@ |
1168 | BuildStatus.CHROOTWAIT, |
1169 | BuildStatus.FAILEDTOUPLOAD) |
1170 | needsbuild = collect_builds(BuildStatus.NEEDSBUILD) |
1171 | - building = collect_builds(BuildStatus.BUILDING) |
1172 | + building = collect_builds(BuildStatus.BUILDING, |
1173 | + BuildStatus.UPLOADING) |
1174 | successful = collect_builds(BuildStatus.FULLYBUILT) |
1175 | |
1176 | # Note: the BuildStatus DBItems are used here to summarize the |
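The diff above replaces the synchronous process-upload invocation with a hand-off: on an OK status the build is marked UPLOADING and its files are moved under the builddmaster root for later processing, while bad filemaps land in a "failed" directory (see the `assertResultCount` checks against "incoming" and "failed"). A minimal sketch of that move, with hypothetical names; the real logic lives in `handleStatus` in `lib/lp/buildmaster/model/packagebuild.py` and differs in detail:

```python
import os
import shutil


def hand_off_upload(upload_dir, builddmaster_root, upload_leaf, succeeded):
    # On success, results go to <root>/incoming/<leaf> for the upload
    # processor to pick up asynchronously; on failure they are parked
    # under <root>/failed/<leaf> instead of being processed inline.
    target = "incoming" if succeeded else "failed"
    dest = os.path.join(builddmaster_root, target, upload_leaf)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.move(upload_dir, dest)
    return dest
```

Because the buildd-manager no longer blocks on the uploader subprocess, it can keep dispatching new jobs while process-upload drains the incoming directory on its own schedule.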
Jelmer, this branch looks OK. As I mentioned on IRC, I'm getting test failures; I'm re-running them now and will paste the results when they're done.
Other than that, I found only one typo that needs fixing.
typo: s/nonexistant/nonexistent/
Keeping the branch unapproved until the tests are sorted out.
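The tests in the diff pin down the contract between the builddmaster and the upload processor: the build cookie is `"<package_build.id>-PACKAGEBUILD-<build_farm_job.id>"` (`test_getBuildCookie`), the upload-dir leaf prefixes it with a timestamp (`'%s-%s' % (now.strftime("%Y%m%d-%H%M%S"), build_cookie)`), and `parse_build_upload_leaf_name` must recover the build-farm-job id from that leaf (`test_getUploadDirLeafCookie_parseable`). A standalone sketch of how those pieces fit together; the helper bodies here are illustrative guesses, not the Launchpad implementations:

```python
import datetime


def make_build_cookie(package_build_id, build_farm_job_id):
    # Cookie format the upload processor relies on (test_getBuildCookie).
    return "%d-PACKAGEBUILD-%d" % (package_build_id, build_farm_job_id)


def get_upload_dir_leaf(build_cookie, now=None):
    # Leaf is "<timestamp>-<cookie>", matching the expectation in
    # test_packagebuild.py.
    now = now or datetime.datetime.now()
    return '%s-%s' % (now.strftime("%Y%m%d-%H%M%S"), build_cookie)


def parse_build_upload_leaf_name(leaf):
    # Assumed parse: the build-farm-job id is the last dash-separated
    # component of the leaf name.
    return int(leaf.split("-")[-1])
```

With these definitions, a round trip through leaf construction and parsing yields the original build-farm-job id, which is what `test_getUploadDirLeafCookie_parseable` asserts.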