Merge lp:~al-maisan/launchpad/merge-conflict into lp:launchpad/db-devel

Proposed by Muharem Hrnjadovic
Status: Merged
Merged at revision: not available
Proposed branch: lp:~al-maisan/launchpad/merge-conflict
Merge into: lp:launchpad/db-devel
Diff against target: 664 lines (+301/-45)
15 files modified
cronscripts/code-import-dispatcher.py (+1/-1)
lib/canonical/launchpad/doc/db-policy.txt (+126/-0)
lib/canonical/launchpad/doc/storm.txt (+5/-0)
lib/canonical/launchpad/ftests/test_system_documentation.py (+4/-0)
lib/canonical/launchpad/webapp/dbpolicy.py (+11/-0)
lib/canonical/launchpad/webapp/interfaces.py (+14/-0)
lib/lp/code/mail/codereviewcomment.py (+1/-1)
lib/lp/code/mail/tests/test_codereviewcomment.py (+8/-0)
lib/lp/codehosting/codeimport/dispatcher.py (+28/-3)
lib/lp/codehosting/codeimport/tests/test_dispatcher.py (+34/-9)
lib/lp/codehosting/scanner/email.py (+10/-5)
lib/lp/codehosting/scanner/tests/test_email.py (+15/-4)
lib/lp/scripts/utilities/importfascist.py (+24/-21)
lib/lp/soyuz/model/buildqueue.py (+3/-0)
lib/lp/soyuz/tests/test_buildqueue.py (+17/-1)
To merge this branch: bzr merge lp:~al-maisan/launchpad/merge-conflict
Reviewer: Graham Binns (community)
Review type: code
Status: Approve
Review via email: mp+19981@code.launchpad.net
Revision history for this message
Muharem Hrnjadovic (al-maisan) wrote:

Fixes the merge conflict between stable and db-devel.

Revision history for this message
Graham Binns (gmb):
review: Approve (code)

Preview Diff

=== modified file 'cronscripts/code-import-dispatcher.py'
--- cronscripts/code-import-dispatcher.py 2010-02-22 01:36:30 +0000
+++ cronscripts/code-import-dispatcher.py 2010-02-23 16:20:35 +0000
@@ -37,7 +37,7 @@
         globalErrorUtility.configure('codeimportdispatcher')
 
         dispatcher = CodeImportDispatcher(self.logger, self.options.max_jobs)
-        dispatcher.findAndDispatchJob(
+        dispatcher.findAndDispatchJobs(
             ServerProxy(config.codeimportdispatcher.codeimportscheduler_url))
 
 
 
=== added file 'lib/canonical/launchpad/doc/db-policy.txt'
--- lib/canonical/launchpad/doc/db-policy.txt 1970-01-01 00:00:00 +0000
+++ lib/canonical/launchpad/doc/db-policy.txt 2010-02-23 16:20:35 +0000
@@ -0,0 +1,126 @@
+Storm Stores & Database Policies
+================================
+
+Launchpad has multiple master and slave databases. Changes to data are
+made on the master and replicated asynchronously to the slave
+databases. Slave databases will usually lag a few seconds behind their
+master. Under high load they may lag a few minutes behind, during
+maintenance they may lag a few hours behind and if things explode
+while admins are on holiday they may lag days behind.
+
+If you know your code needs to change data, or must have the latest
+possible information, retrieve objects from the master database that
+stores the data for your database class.
+
+    >>> from canonical.launchpad.interfaces.lpstorm import IMasterStore
+    >>> from lp.registry.model.person import Person
+    >>> import transaction
+
+    >>> writable_janitor = IMasterStore(Person).find(
+    ...     Person, Person.name == 'janitor').one()
+
+    >>> writable_janitor.displayname = 'Jack the Janitor'
+    >>> transaction.commit()
+
+Sometimes though we know we will not make changes and don't care much
+if the information is a little out of date. In these cases you should
+explicitly retrieve objects from a slave.
+
+The more aggressively we retrieve objects from slave databases instead
+of the master, the better the overall performance of Launchpad will be.
+We can distribute this load over many slave databases but are limited to
+a single master.
+
+    >>> from canonical.launchpad.interfaces.lpstorm import ISlaveStore
+    >>> ro_janitor = ISlaveStore(Person).find(
+    ...     Person, Person.name == 'janitor').one()
+    >>> ro_janitor is writable_janitor
+    False
+
+    >>> ro_janitor.displayname = 'Janice the Janitor'
+    >>> transaction.commit()
+    Traceback (most recent call last):
+    ...
+    InternalError: transaction is read-only
+
+    >>> transaction.abort()
+
+Much of our code does not know if the objects being retrieved need to
+be updatable or have to be absolutely up to date. In this case, we
+retrieve objects from the default store. Which object is returned
+depends on the currently installed database policy.
+
+    >>> from canonical.launchpad.interfaces.lpstorm import IStore
+    >>> default_janitor = IStore(Person).find(
+    ...     Person, Person.name == 'janitor').one()
+    >>> default_janitor is writable_janitor
+    True
+
+As you can see, the default database policy retrieves objects from
+the master database. This allows our code written before database
+replication was implemented to keep working.
+
+To alter this behavior, you can install a different database policy.
+
+    >>> from canonical.launchpad.webapp.dbpolicy import SlaveDatabasePolicy
+    >>> with SlaveDatabasePolicy():
+    ...     default_janitor = IStore(Person).find(
+    ...         Person, Person.name == 'janitor').one()
+    >>> default_janitor is writable_janitor
+    False
+
+The database policy can also affect what happens when objects are
+explicitly retrieved from a slave or master database. For example,
+if we have code that needs to run during database maintenance or
+code we want to prove only accesses slave database resources, we can
+raise an exception if an attempt is made to access master database
+resources.
+
+    >>> from canonical.launchpad.webapp.dbpolicy import (
+    ...     SlaveOnlyDatabasePolicy)
+    >>> with SlaveOnlyDatabasePolicy():
+    ...     whoops = IMasterStore(Person).find(
+    ...         Person, Person.name == 'janitor').one()
+    Traceback (most recent call last):
+    ...
+    DisallowedStore: master
+
+We can even ensure no database activity occurs at all, for instance
+if we need to guarantee a potentially long running call doesn't access
+the database at all, which would start a new and potentially long
+running database transaction.
+
+    >>> from canonical.launchpad.webapp.dbpolicy import DatabaseBlockedPolicy
+    >>> with DatabaseBlockedPolicy():
+    ...     whoops = IStore(Person).find(
+    ...         Person, Person.name == 'janitor').one()
+    Traceback (most recent call last):
+    ...
+    DisallowedStore: ('main', 'default')
+
+Database policies can also be installed and uninstalled using the
+IStoreSelector utility, for cases where the 'with' syntax cannot
+be used.
+
+    >>> from canonical.launchpad.webapp.interfaces import IStoreSelector
+    >>> getUtility(IStoreSelector).push(SlaveDatabasePolicy())
+    >>> try:
+    ...     default_janitor = IStore(Person).find(
+    ...         Person, Person.name == 'janitor').one()
+    ... finally:
+    ...     db_policy = getUtility(IStoreSelector).pop()
+    >>> default_janitor is ro_janitor
+    True
+
+Casting
+-------
+
+If you need to change an object you have a read-only copy of, or are
+unsure whether the object is writable or not, you can easily cast it
+to a writable copy. This is a no-op if the object is already writable,
+so it is good defensive programming.
+
+    >>> from canonical.launchpad.interfaces.lpstorm import IMasterObject
+    >>> IMasterObject(ro_janitor) is writable_janitor
+    True
+
=== modified file 'lib/canonical/launchpad/doc/storm.txt'
--- lib/canonical/launchpad/doc/storm.txt 2009-08-21 17:43:28 +0000
+++ lib/canonical/launchpad/doc/storm.txt 2010-02-23 16:20:35 +0000
@@ -1,3 +1,8 @@
+Note: A more readable version of this is in db-policy.txt. Most of this
+doctest will disappear soon when the auth replication set is collapsed
+back into the main replication set as part of login server separation.
+-- StuartBishop 20100222
+
 In addition to what Storm provides, we also have some Launchpad
 specific Storm tools to cope with our master and slave store arrangement.
 
 
=== modified file 'lib/canonical/launchpad/ftests/test_system_documentation.py'
--- lib/canonical/launchpad/ftests/test_system_documentation.py 2010-02-10 23:14:56 +0000
+++ lib/canonical/launchpad/ftests/test_system_documentation.py 2010-02-23 16:20:35 +0000
@@ -7,6 +7,8 @@
7"""7"""
8# pylint: disable-msg=C01038# pylint: disable-msg=C0103
99
10from __future__ import with_statement
11
10import logging12import logging
11import os13import os
12import unittest14import unittest
@@ -391,6 +393,8 @@
391 one_test = LayeredDocFileSuite(393 one_test = LayeredDocFileSuite(
392 path, setUp=setUp, tearDown=tearDown,394 path, setUp=setUp, tearDown=tearDown,
393 layer=LaunchpadFunctionalLayer,395 layer=LaunchpadFunctionalLayer,
396 # 'icky way of running doctests with __future__ imports
397 globs={'with_statement': with_statement},
394 stdout_logging_level=logging.WARNING398 stdout_logging_level=logging.WARNING
395 )399 )
396 suite.addTest(one_test)400 suite.addTest(one_test)
397401
=== modified file 'lib/canonical/launchpad/webapp/dbpolicy.py'
--- lib/canonical/launchpad/webapp/dbpolicy.py 2010-02-05 12:17:56 +0000
+++ lib/canonical/launchpad/webapp/dbpolicy.py 2010-02-23 16:20:35 +0000
@@ -111,6 +111,17 @@
111 """See `IDatabasePolicy`."""111 """See `IDatabasePolicy`."""
112 pass112 pass
113113
114 def __enter__(self):
115 """See `IDatabasePolicy`."""
116 getUtility(IStoreSelector).push(self)
117
118 def __exit__(self, exc_type, exc_value, traceback):
119 """See `IDatabasePolicy`."""
120 policy = getUtility(IStoreSelector).pop()
121 assert policy is self, (
122 "Unexpected database policy %s returned by store selector"
123 % repr(policy))
124
114125
115class DatabaseBlockedPolicy(BaseDatabasePolicy):126class DatabaseBlockedPolicy(BaseDatabasePolicy):
116 """`IDatabasePolicy` that blocks all access to the database."""127 """`IDatabasePolicy` that blocks all access to the database."""
117128
=== modified file 'lib/canonical/launchpad/webapp/interfaces.py'
--- lib/canonical/launchpad/webapp/interfaces.py 2010-02-17 11:13:06 +0000
+++ lib/canonical/launchpad/webapp/interfaces.py 2010-02-23 16:20:35 +0000
@@ -756,6 +756,20 @@
     The publisher adapts the request to `IDatabasePolicy` to
     instantiate the policy for the current request.
     """
+    def __enter__():
+        """Standard Python context manager interface.
+
+        The IDatabasePolicy will install itself using the IStoreSelector
+        utility.
+        """
+
+    def __exit__(exc_type, exc_value, traceback):
+        """Standard Python context manager interface.
+
+        The IDatabasePolicy will uninstall itself using the IStoreSelector
+        utility.
+        """
+
     def getStore(name, flavor):
         """Retrieve a Store.
 
 
=== modified file 'lib/lp/code/mail/codereviewcomment.py'
--- lib/lp/code/mail/codereviewcomment.py 2009-10-29 23:51:35 +0000
+++ lib/lp/code/mail/codereviewcomment.py 2010-02-23 16:20:35 +0000
@@ -133,7 +133,7 @@
     def _addAttachments(self, ctrl, email):
         """Add the attachments from the original message."""
         # Only reattach the display_aliases.
-        for content, content_type, filename in self.attachments:
+        for content, filename, content_type in self.attachments:
             # Append directly to the controller's list.
             ctrl.addAttachment(
                 content, content_type=content_type, filename=filename)
 
=== modified file 'lib/lp/code/mail/tests/test_codereviewcomment.py'
--- lib/lp/code/mail/tests/test_codereviewcomment.py 2009-11-01 23:13:29 +0000
+++ lib/lp/code/mail/tests/test_codereviewcomment.py 2010-02-23 16:20:35 +0000
@@ -215,6 +215,14 @@
         [outgoing_attachment] = mailer.attachments
         self.assertEqual('inc.diff', outgoing_attachment[1])
         self.assertEqual('text/x-diff', outgoing_attachment[2])
+        # The attachments are attached to the outgoing message.
+        person = bmp.target_branch.owner
+        message = mailer.generateEmail(
+            person.preferredemail.email, person).makeMessage()
+        self.assertTrue(message.is_multipart())
+        attachment = message.get_payload()[1]
+        self.assertEqual('inc.diff', attachment.get_filename())
+        self.assertEqual('text/x-diff', attachment['content-type'])
 
     def makeCommentAndParticipants(self):
         """Create a merge proposal and comment.
 
=== modified file 'lib/lp/codehosting/codeimport/dispatcher.py'
--- lib/lp/codehosting/codeimport/dispatcher.py 2010-02-22 01:36:30 +0000
+++ lib/lp/codehosting/codeimport/dispatcher.py 2010-02-23 16:20:35 +0000
@@ -16,6 +16,7 @@
 import os
 import socket
 import subprocess
+import time
 
 from canonical.config import config
 
@@ -32,13 +33,14 @@
     worker_script = os.path.join(
         config.root, 'scripts', 'code-import-worker-db.py')
 
-    def __init__(self, logger, worker_limit):
+    def __init__(self, logger, worker_limit, _sleep=time.sleep):
         """Initialize an instance.
 
         :param logger: A `Logger` object.
         """
         self.logger = logger
         self.worker_limit = worker_limit
+        self._sleep = _sleep
 
     def getHostname(self):
         """Return the hostname of this machine.
@@ -65,15 +67,38 @@
 
 
     def findAndDispatchJob(self, scheduler_client):
-        """Check for and dispatch a job if necessary."""
+        """Check for and dispatch a job if necessary.
+
+        :return: A boolean, true if a job was found and dispatched.
+        """
 
         job_id = scheduler_client.getJobForMachine(
             self.getHostname(), self.worker_limit)
 
         if job_id == 0:
             self.logger.info("No jobs pending.")
-            return
+            return False
 
         self.logger.info("Dispatching job %d." % job_id)
 
         self.dispatchJob(job_id)
+        return True
+
+    def _getSleepInterval(self):
+        """How long to sleep for until asking for a new job.
+
+        The basic idea is to wait longer if the machine is more heavily
+        loaded, so that less loaded slaves get a chance to grab some jobs.
+
+        We assume worker_limit will be roughly the number of CPUs in the
+        machine, so load/worker_limit is roughly how loaded the machine is.
+        """
+        return 5*os.getloadavg()[0]/self.worker_limit
+
+    def findAndDispatchJobs(self, scheduler_client):
+        """Call findAndDispatchJob until no job is found."""
+        while True:
+            found = self.findAndDispatchJob(scheduler_client)
+            if not found:
+                break
+            self._sleep(self._getSleepInterval())
 
=== modified file 'lib/lp/codehosting/codeimport/tests/test_dispatcher.py'
--- lib/lp/codehosting/codeimport/tests/test_dispatcher.py 2010-02-22 02:06:57 +0000
+++ lib/lp/codehosting/codeimport/tests/test_dispatcher.py 2010-02-23 16:20:35 +0000
@@ -24,11 +24,11 @@
 class StubSchedulerClient:
     """A scheduler client that returns a pre-arranged answer."""
 
-    def __init__(self, id_to_return):
-        self.id_to_return = id_to_return
+    def __init__(self, ids_to_return):
+        self.ids_to_return = ids_to_return
 
     def getJobForMachine(self, machine, limit):
-        return self.id_to_return
+        return self.ids_to_return.pop(0)
 
 
 class MockSchedulerClient:
@@ -51,9 +51,10 @@
         TestCase.setUp(self)
         self.pushConfig('codeimportdispatcher', forced_hostname='none')
 
-    def makeDispatcher(self, worker_limit=10):
+    def makeDispatcher(self, worker_limit=10, _sleep=lambda delay: None):
         """Make a `CodeImportDispatcher`."""
-        return CodeImportDispatcher(QuietFakeLogger(), worker_limit)
+        return CodeImportDispatcher(
+            QuietFakeLogger(), worker_limit, _sleep=_sleep)
 
     def test_getHostname(self):
         # By default, getHostname return the same as socket.gethostname()
@@ -111,16 +112,16 @@
         calls = []
         dispatcher = self.makeDispatcher()
         dispatcher.dispatchJob = lambda job_id: calls.append(job_id)
-        dispatcher.findAndDispatchJob(StubSchedulerClient(10))
-        self.assertEqual([10], calls)
+        found = dispatcher.findAndDispatchJob(StubSchedulerClient([10]))
+        self.assertEqual(([10], True), (calls, found))
 
     def test_findAndDispatchJob_noJobWaiting(self):
         # If there is no job to dispatch, then we just exit quietly.
         calls = []
         dispatcher = self.makeDispatcher()
         dispatcher.dispatchJob = lambda job_id: calls.append(job_id)
-        dispatcher.findAndDispatchJob(StubSchedulerClient(0))
-        self.assertEqual([], calls)
+        found = dispatcher.findAndDispatchJob(StubSchedulerClient([0]))
+        self.assertEqual(([], False), (calls, found))
 
     def test_findAndDispatchJob_calls_getJobForMachine_with_limit(self):
         # findAndDispatchJob calls getJobForMachine on the scheduler client
@@ -133,5 +134,29 @@
             [(dispatcher.getHostname(), worker_limit)],
             scheduler_client.calls)
 
+    def test_findAndDispatchJobs(self):
+        # findAndDispatchJobs calls getJobForMachine on the scheduler_client,
+        # dispatching jobs, until it indicates that there are no more jobs to
+        # dispatch.
+        calls = []
+        dispatcher = self.makeDispatcher()
+        dispatcher.dispatchJob = lambda job_id: calls.append(job_id)
+        dispatcher.findAndDispatchJobs(StubSchedulerClient([10, 9, 0]))
+        self.assertEqual([10, 9], calls)
+
+    def test_findAndDispatchJobs_sleeps(self):
+        # After finding a job, findAndDispatchJobs sleeps for an interval as
+        # returned by _getSleepInterval.
+        sleep_calls = []
+        interval = self.factory.getUniqueInteger()
+        def _sleep(delay):
+            sleep_calls.append(delay)
+        dispatcher = self.makeDispatcher(_sleep=_sleep)
+        dispatcher.dispatchJob = lambda job_id: None
+        dispatcher._getSleepInterval = lambda : interval
+        dispatcher.findAndDispatchJobs(StubSchedulerClient([10, 0]))
+        self.assertEqual([interval], sleep_calls)
+
+
 def test_suite():
     return TestLoader().loadTestsFromName(__name__)
 
=== modified file 'lib/lp/codehosting/scanner/email.py'
--- lib/lp/codehosting/scanner/email.py 2010-01-06 12:15:42 +0000
+++ lib/lp/codehosting/scanner/email.py 2010-02-23 16:20:35 +0000
@@ -38,15 +38,18 @@
     if number_removed == 0:
         return
     if number_removed == 1:
-        contents = '1 revision was removed from the branch.'
+        count = '1 revision'
+        contents = '%s was removed from the branch.' % count
     else:
-        contents = ('%d revisions were removed from the branch.'
-                    % number_removed)
+        count = '%d revisions' % number_removed
+        contents = '%s were removed from the branch.' % count
     # No diff is associated with the removed email.
+    subject = "[Branch %s] %s removed" % (
+        revisions_removed.db_branch.unique_name, count)
     getUtility(IRevisionMailJobSource).create(
         revisions_removed.db_branch, revno='removed',
         from_address=config.canonical.noreply_from_address,
-        body=contents, perform_diff=False, subject=None)
+        body=contents, perform_diff=False, subject=subject)
 
 
 @adapter(events.TipChanged)
@@ -62,9 +65,11 @@
         message = ('First scan of the branch detected %s'
                    ' in the revision history of the branch.' %
                    revisions)
+        subject = "[Branch %s] %s" % (
+            tip_changed.db_branch.unique_name, revisions)
         getUtility(IRevisionMailJobSource).create(
             tip_changed.db_branch, 'initial',
-            config.canonical.noreply_from_address, message, False, None)
+            config.canonical.noreply_from_address, message, False, subject)
     else:
         getUtility(IRevisionsAddedJobSource).create(
             tip_changed.db_branch, tip_changed.db_branch.last_scanned_id,
 
=== modified file 'lib/lp/codehosting/scanner/tests/test_email.py'
--- lib/lp/codehosting/scanner/tests/test_email.py 2009-07-17 00:26:05 +0000
+++ lib/lp/codehosting/scanner/tests/test_email.py 2010-02-23 16:20:35 +0000
@@ -63,8 +63,12 @@
         self.assertEqual(len(stub.test_emails), 1)
         [initial_email] = stub.test_emails
         expected = 'First scan of the branch detected 0 revisions'
-        email_body = email.message_from_string(initial_email[2]).get_payload()
+        message = email.message_from_string(initial_email[2])
+        email_body = message.get_payload()
         self.assertTextIn(expected, email_body)
+        self.assertEqual(
+            '[Branch %s] 0 revisions' % self.db_branch.unique_name,
+            message['Subject'])
 
     def test_import_revision(self):
         self.commitRevision()
@@ -74,8 +78,12 @@
         [initial_email] = stub.test_emails
         expected = ('First scan of the branch detected 1 revision'
                     ' in the revision history of the=\n branch.')
-        email_body = email.message_from_string(initial_email[2]).get_payload()
+        message = email.message_from_string(initial_email[2])
+        email_body = message.get_payload()
         self.assertTextIn(expected, email_body)
+        self.assertEqual(
+            '[Branch %s] 1 revision' % self.db_branch.unique_name,
+            message['Subject'])
 
     def test_import_uncommit(self):
         self.commitRevision()
@@ -88,9 +96,12 @@
         self.assertEqual(len(stub.test_emails), 1)
         [uncommit_email] = stub.test_emails
         expected = '1 revision was removed from the branch.'
-        email_body = email.message_from_string(
-            uncommit_email[2]).get_payload()
+        message = email.message_from_string(uncommit_email[2])
+        email_body = message.get_payload()
         self.assertTextIn(expected, email_body)
+        self.assertEqual(
+            '[Branch %s] 1 revision removed' % self.db_branch.unique_name,
+            message['Subject'])
 
     def test_import_recommit(self):
         # When scanning the uncommit and new commit there should be an email
 
=== modified file 'lib/lp/scripts/utilities/importfascist.py'
--- lib/lp/scripts/utilities/importfascist.py 2010-02-04 03:07:25 +0000
+++ lib/lp/scripts/utilities/importfascist.py 2010-02-23 16:20:35 +0000
@@ -173,12 +173,15 @@
             % self.import_into)
 
 
+# The names of the arguments form part of the interface of __import__(...), and
+# must not be changed, as code may choose to invoke __import__ using keyword
+# arguments - e.g. the encodings module in Python 2.6.
 # pylint: disable-msg=W0102,W0602
-def import_fascist(module_name, globals={}, locals={}, from_list=[], level=-1):
+def import_fascist(name, globals={}, locals={}, fromlist=[], level=-1):
     global naughty_imports
 
     try:
-        module = original_import(module_name, globals, locals, from_list, level)
+        module = original_import(name, globals, locals, fromlist, level)
     except ImportError:
         # XXX sinzui 2008-04-17 bug=277274:
         # import_fascist screws zope configuration module which introspects
@@ -188,18 +191,18 @@
         # time doesn't exist and dies a horrible death because of the import
         # fascist. That's the long explanation for why we special case this
        # module.
-        if module_name.startswith('zope.app.layers.'):
-            module_name = module_name[16:]
-            module = original_import(module_name, globals, locals, from_list, level)
+        if name.startswith('zope.app.layers.'):
+            name = name[16:]
+            module = original_import(name, globals, locals, fromlist, level)
         else:
             raise
     # Python's re module imports some odd stuff every time certain regexes
     # are used. Let's optimize this.
-    if module_name == 'sre':
+    if name == 'sre':
         return module
 
     # Mailman 2.1 code base is originally circa 1998, so yeah, no __all__'s.
-    if module_name.startswith('Mailman'):
+    if name.startswith('Mailman'):
         return module
 
     # Some uses of __import__ pass None for globals, so handle that.
@@ -215,14 +218,14 @@
 
     # Check the "NotFoundError" policy.
     if (import_into.startswith('canonical.launchpad.database') and
-        module_name == 'zope.exceptions'):
-        if from_list and 'NotFoundError' in from_list:
+        name == 'zope.exceptions'):
+        if fromlist and 'NotFoundError' in fromlist:
             raise NotFoundPolicyViolation(import_into)
 
     # Check the database import policy.
-    if (module_name.startswith(database_root) and
+    if (name.startswith(database_root) and
         not database_import_allowed_into(import_into)):
-        error = DatabaseImportPolicyViolation(import_into, module_name)
+        error = DatabaseImportPolicyViolation(import_into, name)
         naughty_imports.add(error)
         # Raise an error except in the case of browser.traversers.
         # This exception to raising an error is only temporary, until
@@ -231,28 +234,28 @@
             raise error
 
     # Check the import from __all__ policy.
-    if from_list is not None and (
+    if fromlist is not None and (
         import_into.startswith('canonical') or import_into.startswith('lp')):
         # We only want to warn about "from foo import bar" violations in our
         # own code.
-        from_list = list(from_list)
+        fromlist = list(fromlist)
         module_all = getattr(module, '__all__', None)
         if module_all is None:
-            if from_list == ['*']:
+            if fromlist == ['*']:
                 # "from foo import *" is naughty if foo has no __all__
-                error = FromStarPolicyViolation(import_into, module_name)
+                error = FromStarPolicyViolation(import_into, name)
                 naughty_imports.add(error)
                 raise error
         else:
-            if from_list == ['*']:
+            if fromlist == ['*']:
                 # "from foo import *" is allowed if foo has an __all__
                 return module
             if is_test_module(import_into):
                 # We don't bother checking imports into test modules.
                 return module
-            allowed_from_list = valid_imports_not_in_all.get(
-                module_name, set())
-            for attrname in from_list:
+            allowed_fromlist = valid_imports_not_in_all.get(
+                name, set())
+            for attrname in fromlist:
                 # Check that each thing we are importing into the module is
                 # either in __all__, is a module itself, or is a specific
                 # exception.
@@ -264,13 +267,13 @@
                     # You can import modules even when they aren't declared in
                     # __all__.
                     continue
-                if attrname in allowed_from_list:
+                if attrname in allowed_fromlist:
                     # Some things can be imported even if they aren't in
                     # __all__.
                     continue
                 if attrname not in module_all:
                     error = NotInModuleAllPolicyViolation(
-                        import_into, module_name, attrname)
+                        import_into, name, attrname)
                     naughty_imports.add(error)
     return module
 
 
=== modified file 'lib/lp/soyuz/model/buildqueue.py'
--- lib/lp/soyuz/model/buildqueue.py 2010-02-02 14:01:14 +0000
+++ lib/lp/soyuz/model/buildqueue.py 2010-02-23 16:20:35 +0000
@@ -506,6 +506,9 @@
         result_set = store.find(
             BuildQueue,
             BuildQueue.job == Job.id,
+            # XXX Michael Nelson 2010-02-22 bug=499421
+            # Avoid corrupt build jobs where the builder is None.
+            BuildQueue.builder != None,
             # status is a property. Let's use _status.
             Job._status == JobStatus.RUNNING,
             Job.date_started != None)
 
=== modified file 'lib/lp/soyuz/tests/test_buildqueue.py'
--- lib/lp/soyuz/tests/test_buildqueue.py 2010-02-22 22:50:46 +0000
+++ lib/lp/soyuz/tests/test_buildqueue.py 2010-02-23 16:20:35 +0000
@@ -177,6 +177,19 @@
         buildqueue = BuildQueue(job=job.id)
         self.assertEquals(buildqueue, self.buildqueueset.getByJob(job))
 
+    def test_getActiveBuildJobs_no_builder_bug499421(self):
+        # An active build queue item that does not have a builder will
+        # not be included in the results and so will not block the
+        # buildd-manager.
+        active_jobs = self.buildqueueset.getActiveBuildJobs()
+        self.assertEqual(1, active_jobs.count())
+        active_job = active_jobs[0]
+        active_job.builder = None
+        self.assertTrue(
+            self.buildqueueset.getActiveBuildJobs().is_empty(),
+            "An active build job must have a builder.")
+
+
 
 class TestBuildQueueBase(TestCaseWithFactory):
     """Setup the test publisher and some builders."""
@@ -248,7 +261,10 @@
 
         # hppa native
         self.builders[(self.hppa_proc.id, False)] = [
-            self.h5, self.h6, self.h7]
+            self.h5,
+            self.h6,
+            self.h7,
+            ]
         # hppa virtual
         self.builders[(self.hppa_proc.id, True)] = [
             self.h1, self.h2, self.h3, self.h4]
