Merge lp:~edwin-grubbs/launchpad/bug-615654-jobqueue-cron-script into lp:launchpad/db-devel

Proposed by Edwin Grubbs
Status: Merged
Approved by: Aaron Bentley
Approved revision: no longer in the source branch.
Merged at revision: 9873
Proposed branch: lp:~edwin-grubbs/launchpad/bug-615654-jobqueue-cron-script
Merge into: lp:launchpad/db-devel
Prerequisite: lp:~edwin-grubbs/launchpad/bug-615654-queue-addmember-emails
Diff against target: 854 lines (+512/-71)
17 files modified
cronscripts/merge-proposal-jobs.py (+1/-1)
cronscripts/process-job-source-groups.py (+114/-0)
cronscripts/process-job-source.py (+65/-0)
database/schema/security.cfg (+16/-0)
lib/canonical/config/schema-lazr.conf (+14/-0)
lib/canonical/launchpad/scripts/tests/__init__.py (+1/-1)
lib/lp/archiveuploader/tests/test_ppauploadprocessor.py (+2/-0)
lib/lp/code/interfaces/branchmergeproposal.py (+2/-1)
lib/lp/registry/interfaces/persontransferjob.py (+10/-0)
lib/lp/registry/model/persontransferjob.py (+0/-10)
lib/lp/registry/stories/person/xx-approve-members.txt (+3/-0)
lib/lp/registry/tests/test_membership_notification_job_creation.py (+65/-0)
lib/lp/registry/tests/test_process_job_sources_cronjob.py (+125/-0)
lib/lp/services/job/interfaces/job.py (+8/-0)
lib/lp/services/job/runner.py (+32/-16)
lib/lp/services/scripts/base.py (+44/-32)
lib/lp/testing/__init__.py (+10/-10)
To merge this branch: bzr merge lp:~edwin-grubbs/launchpad/bug-615654-jobqueue-cron-script
Reviewer Review Type Date Requested Status
Aaron Bentley (community) Approve
Gavin Panella (community) Approve
Review via email: mp+36910@code.launchpad.net

Description of the change

Summary
-------

This branch adds a cron script to process IMembershipNotificationJobs.
The cron script has been abstracted so that new job sources can be
processed just by adding configs to schema-lazr.conf, as long as each
job source uses a crontab_group that already has a crontab entry
configured in production.
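Concretely, a new job source only needs its own config section plus an entry in the [process-job-source-groups] job_sources list. A sketch modeled on the IMembershipNotificationJobSource section added in this branch (INewJobSource, its module, and its dbuser are illustrative names, not real ones):

```ini
# Illustrative sketch only; the section name, module, and dbuser are
# made up, following the pattern of IMembershipNotificationJobSource.
[INewJobSource]
# This section is used by cronscripts/process-job-source.py.
module: lp.registry.interfaces.newjob
dbuser: new-job
crontab_group: MAIN
```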

Implementation details
----------------------

New db users and configs for cronjobs:
    database/schema/security.cfg
    lib/canonical/config/schema-lazr.conf

Cronscripts and tests.
    cronscripts/process-job-source-groups.py
    cronscripts/process-job-source.py
    lib/lp/registry/tests/test_membership_notification_job_creation.py
    lib/lp/registry/tests/test_process_job_sources_cronjob.py

Improved error message.
    lib/canonical/launchpad/webapp/errorlog.py

Added get() to IJobSource definition, since TwistedJobRunner needs it.
Moved the get() method into the class that actually provides IJobSource.
    lib/lp/services/job/interfaces/job.py
    lib/lp/registry/model/persontransferjob.py

Cruft from previous branch.
    lib/lp/registry/model/teammembership.py

Eliminated the need for TwistedJobRunner cron scripts to be run with
bin/py by importing _pythonpath in ampoule child processes, just as
normal cron scripts do. Also eliminated the need to specify error_dir,
oops_prefix, and copy_to_zlog when the defaults in [error_reports] are fine.
    lib/lp/services/job/runner.py

Tests
-----

./bin/test -vv -t 'test_membership_notification_job_creation|test_process_job_sources_cronjob'

Demo and Q/A
------------

* Add a member to a team.
* Change the member status to ADMIN.
* Run ./cronscripts/process-job-source-groups.py -v MAIN
* Check email.

Revision history for this message
Gavin Panella (allenap) wrote :

Hi Edwin,

This branch looks good.

I know you're adding ProcessJobSourceGroups in here as well as the
membership stuff, but it still seems like a lot of work to do
something asynchronously. Moving something out of the app server takes
a lot of plumbing work, and it's not trivial plumbing work either.

I'd really like a way to create, in the app server, something like a
persistent closure that can be run later, there or somewhere
else. When run it would have the same user and database access as in
the app server thread that spawned it. Creating a new job now seems to
involve creating tables, database users, job sources, and so on, and
it's getting in the way of the actual work that needs doing.

I'm rambling on, and it's not a fault in this branch, but after doing
this recently, do you have any thoughts on that?

Gavin.

[1]

+        for job_source in selected_job_sources:
+            child = subprocess.Popen(child_args + [job_source])
+            children.append(child)
+        for child in children:
+            child.wait()

Is selected_job_sources ever likely to be large? If not now, but it
could be one day, consider filing a bug to limit the number of
concurrent processes.
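A hypothetical sketch (not part of this branch) of one way to cap concurrency: reap the oldest child before spawning another once the limit is reached.

```python
# Hypothetical sketch, not part of this branch: cap the number of
# concurrent child processes by waiting on the oldest child before
# spawning another once the limit is reached.
import subprocess
import sys

def run_bounded(commands, max_children=4):
    """Run every command, with at most max_children alive at once.

    Returns the Popen objects so callers can inspect exit codes.
    """
    running = []
    finished = []
    for command in commands:
        if len(running) >= max_children:
            oldest = running.pop(0)
            oldest.wait()  # block until the oldest child exits
            finished.append(oldest)
        running.append(subprocess.Popen(command))
    for child in running:
        child.wait()
    return finished + running

# Example: five trivial children, never more than two alive at a time.
children = run_bounded([[sys.executable, '-c', 'pass']] * 5, max_children=2)
```

This keeps the simple fire-and-spawn loop from the branch but bounds the process count without needing a thread pool.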

[2]

+        for child in children:
+            child.wait()

Is there any value in checking and doing something with the exit code
from the child?
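A hypothetical sketch (not in this branch) of what checking the exit codes could look like; 'GoodSource' and 'BadSource' are stand-ins for real job source names:

```python
# Hypothetical sketch, not part of this branch: collect each child's
# exit code so failures are at least recorded instead of silently
# dropped. The job source names and child commands are stand-ins.
import subprocess
import sys

children = {
    'GoodSource': subprocess.Popen(
        [sys.executable, '-c', 'raise SystemExit(0)']),
    'BadSource': subprocess.Popen(
        [sys.executable, '-c', 'raise SystemExit(2)']),
}
failures = {}
for job_source, child in children.items():
    returncode = child.wait()
    if returncode != 0:
        failures[job_source] = returncode
# failures now maps failing job sources to their exit codes; a cron
# script could log or report this.
```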

[3]

+            packages=('_pythonpath', 'twisted', 'ampoule'), env=env)

I just looked again at _pythonpath.py and now I can't see.

review: Approve
Revision history for this message
Aaron Bentley (abentley) wrote :

Gavin, radical changes in the way we create and run tasks are the purview of the NewTaskSystem. Not having to create new database users is already listed in https://dev.launchpad.net/Foundations/NewTaskSystem/Requirements and you may wish to add your other wishes under "nice to have".

Revision history for this message
Edwin Grubbs (edwin-grubbs) wrote :

> Hi Edwin,
>
> This branch looks good.

Thanks for the review. Unfortunately, abentley had already started reviewing it, but it was all done on IRC up to this point. I should have just listed him as the requested reviewer. Sorry for the duplicate effort. I'll try to answer your questions below.

> I know you're adding ProcessJobSourceGroups in here as well as the
> membership stuff, but it still seems like a lot of work to do
> something asynchronously. Moving something out of the app server takes
> a lot of plumbing work, and it's not trivial plumbing work either.
>
> I'd really like a way to create, in the app server, something like a
> persistent closure that can be run later, there or somewhere
> else. When run it would have the same user and database access as in
> the app server thread that spawned it. Creating a new job now seems to
> involve creating tables, database users, job sources, and so on, and
> it's getting in the way of the actual work that needs doing.
>
> I'm rambling on, and it's not a fault in this branch, but after doing
> this recently, do you have any thoughts on that?
>
> Gavin.

It seems that the reason we don't support a closure method for processing the jobs is that there are a lot of security restrictions placed on us. For example, each job class is required to have its own dbuser, so that the losas can figure out which job class is causing problems on the db. Of course, the job system seems much better suited for really slow processes, and it might be worthwhile to have a secondary solution that just moves the processing after the request response.

> [1]
>
> +        for job_source in selected_job_sources:
> +            child = subprocess.Popen(child_args + [job_source])
> +            children.append(child)
> +        for child in children:
> +            child.wait()
>
> Is selected_job_sources ever likely to be large? If not now, but could
> be one day, consider filing a bug to limit the number of concurrency
> processes.

Currently, each job source class is given its own process. If we run into a problem with too many concurrent jobs being processed, the job source classes can be reorganized into different crontab_groups, and then different crontab entries can process the jobs at different times. We are not really interested in a long term solution for this, since the job system will most likely be moved to some kind of message queue, so the cron scripts for job processing will disappear.

> [2]
>
> +        for child in children:
> +            child.wait()
>
> Is there any value in checking and doing something with the exit code
> from the child?

No, waiting for the children to finish is really only helpful for testing, so that all the output from the child processes can be gathered. The cron scripts log to files. I am actually making the default behavior not to wait on the child processes, so that one really long-running child does not prevent the other job sources from running the next time the crontab entry fires, which would otherwise happen because the parent process would still hold its lock file.

> [3]
>
> +            packages=('_pythonpath', 'twisted', 'ampoule'), env=env)
>
> I just looked again at ...


Revision history for this message
Aaron Bentley (abentley) wrote :

Waiting for children to exit will mean that no new jobs can be processed until all the jobs complete. This may severely impair responsiveness. I suggest starting the subprocesses and then exiting.

The NoErrorOptionParser should not exist. Option handling should be taken care of in the ProcessJobSource class. If the layering is getting in our way, we should fix it, not work around it.

Unless you are specifically testing stringification, you should not stringify objects in tests, so that your tests are not fragile.

In test_setstatus_admin, what's the purpose of transaction.commit?

Since you do person_set.getByEmail('<email address hidden>'), you can use login_person or with person_logged_in instead of login. Also, lifeless requested that you use the constant for <email address hidden> instead of spelling it out.

Please use lp.testing.run_script instead of rolling your own.

In test_processed, I don't think you really care what the database id of the job is, so use assertTextMatchesExpressionIgnoreWhitespace. (You'll need to move it to TestCaseWithFactory.)

get isn't required to run jobs under the non-twisted job runner, so it seems like an unnecessary requirement on IJobSource.

Perhaps ProcessJobSource should go in lp.services.job.runner ?

test_empty_queue looks fragile because it includes the directory of the lock file. Perhaps another case for assertTextMatchesExpressionIgnoreWhitespace?

Perhaps PYTHONPATH and LPCONFIG should be handled by common code, eg:
for name in ['PYTHONPATH', 'LPCONFIG']:
    if name in os.environ:
        env[name] = os.environ[name]

review: Needs Fixing
Revision history for this message
Gavin Panella (allenap) wrote :

> Thanks for the review. Unfortunately, abentley had already started reviewing
> it, but it was all done on IRC up to this point. I should have just listed him
> as the requested reviewer. Sorry for the duplicate effort. I'll try to answer
> your questions below.

No worries. It's good that Aaron also reviewed this. He spotted a lot
of stuff that I skimmed over; I didn't take as long on this review as
I ought to have done.

>
> > I know you're adding ProcessJobSourceGroups in here as well as the
> > membership stuff, but it still seems like a lot of work to do
> > something asynchronously. Moving something out of the app server takes
> > a lot of plumbing work, and it's not trivial plumbing work either.
> >
> > I'd really like a way to create, in the app server, something like a
> > persistent closure that can be run later, there or somewhere
> > else. When run it would have the same user and database access as in
> > the app server thread that spawned it. Creating a new job now seems to
> > involve creating tables, database users, job sources, and so on, and
> > it's getting in the way of the actual work that needs doing.
> >
> > I'm rambling on, and it's not a fault in this branch, but after doing
> > this recently, do you have any thoughts on that?
> >
> > Gavin.
>
>
> It seems that the reason we don't support a closure method for processing the
> jobs is that there are a lot of security restrictions placed on us. For
> example, each job class is required to have its own dbuser, so that the losas
> can figure out which job class is causing problems on the db. Of course, the
> job system seems much better suited for really slow processes, and it might be
> worthwhile to have a secondary solution that just moves the processing after
> the request response.

Yes, that sounds good. As suggested by Aaron, I'll add it to the
wishlist.

>
> > [1]
> >
> > +        for job_source in selected_job_sources:
> > +            child = subprocess.Popen(child_args + [job_source])
> > +            children.append(child)
> > +        for child in children:
> > +            child.wait()
> >
> > Is selected_job_sources ever likely to be large? If not now, but it
> > could be one day, consider filing a bug to limit the number of
> > concurrent processes.
>
>
> Currently, each job source class is given its own process. If we run into a
> problem with too many concurrent jobs being processed, the job source classes
> can be reorganized into different crontab_groups, and then different crontab
> entries can process the jobs at different times. We are not really interested
> in a long term solution for this, since the job system will most likely be
> moved to some kind of message queue, so the cron scripts for job processing
> will disappear.

Okay, fair enough.

>
> > [2]
> >
> > +        for child in children:
> > +            child.wait()
> >
> > Is there any value in checking and doing something with the exit code
> > from the child?
>
>
> No, waiting for the children to finish is really only helpful for testing, so
> that all the output from the child processes can be gathered. The cron scripts
> log to files, and I am actually making the default behavi...


Revision history for this message
Edwin Grubbs (edwin-grubbs) wrote :

On Thu, Sep 30, 2010 at 11:02 AM, Aaron Bentley <email address hidden> wrote:
> Review: Needs Fixing
> Waiting for children to exit will mean that no new jobs can be processed until all the jobs complete.  This may severely impair responsiveness.  I suggest starting the subprocesses and then exiting.

It no longer waits for child processes by default. I added an option
to wait, which is convenient when running it from the command line
when experimenting.

> The NoErrorOptionParser should not exist.  Option handling should be taken care of in the ProcessJobSource class.  If the layering is getting in our way, we should fix it, not work around it.

I moved most of the setup code to the run() methods in LaunchpadScript
and LaunchpadCronScript so __init__ does very little now besides
setting up the parser attribute.

> Unless you are specifically testing stringification, you should not stringify objects in tests, so that your tests are not fragile.

Fixed.

> In test_setstatus_admin, what's the purpose of transaction.commit?

I'm not sure why I was thinking I would need that. Removed.

> Since you do person_set.getByEmail('<email address hidden>'), you can use login_person or with person_logged_in instead of login.  Also, lifeless requested that you use the constant for <email address hidden>' instead of spelling it out.

Fixed.

> Please use lp.testing.run_script instead of rolling your own.

Fixed.

> In test_processed, I don't think you really care what the database id of the job is, so use  assertTextMatchesExpressionIgnoreWhitespace.  (You'll need to move it to TestCaseWithFactory.)

Fixed.

> get isn't required to run jobs under the non-twisted job runner, so it seems like an unnecessary requirement on IJobSource.

ok, I've moved get() to a new ITwistedJobSource interface.

> Perhaps ProcessJobSource should go in lp.services.job.runner ?

I'd rather wait until we have a reason for subclassing from it, which
I hope won't happen before we stop using cron for job processing.

> test_empty_queue looks fragile because it includes the directory of the lock file.  Perhaps another case for assertTextMatchesExpressionIgnoreWhitespace?

Fixed.

> Perhaps PYTHONPATH and LPCONFIG should be handled by common code, eg:
> for name in ['PYTHONPATH', 'LPCONFIG']:
>    if name in os.environ:
>        env[name] = os.environ[name]

Fixed.

Incremental diff:

=== modified file 'cronscripts/process-job-source-groups.py'
--- cronscripts/process-job-source-groups.py 2010-09-28 15:50:05 +0000
+++ cronscripts/process-job-source-groups.py 2010-09-30 20:33:49 +0000
@@ -47,6 +47,13 @@
             '-e', '--exclude', dest='excluded_job_sources',
             metavar="JOB_SOURCE", default=[], action='append',
             help="Exclude specific job sources.")
+        self.parser.add_option(
+            '--wait', dest='do_wait', default=False, action='store_true',
+            help="Wait for the child processes to finish. This is useful "
+                 "for testing, but it shouldn't be used in a cronjob, since "
+                 "it would prevent the cronjob from processing new jobs "
+                 "if just one of the child processes is still processing, "
...

Revision history for this message
Edwin Grubbs (edwin-grubbs) wrote :

I realized that I could make process-job-source.py simpler. Here is the diff.

=== modified file 'cronscripts/process-job-source.py'
--- cronscripts/process-job-source.py 2010-09-30 20:39:02 +0000
+++ cronscripts/process-job-source.py 2010-10-01 13:39:54 +0000
@@ -24,12 +24,8 @@
         "For more help, run:\n"
         " cronscripts/process-job-source-groups.py --help")

-    def configure(self):
-        """Override configs main() is called by run().
-
-        It is done here instead of __init__(), so that we get to use its
-        option parser.
-        """
+    def __init__(self):
+        super(ProcessJobSource, self).__init__()
         if len(self.args) != 1:
             self.parser.print_help()
             sys.exit(1)
@@ -54,5 +50,4 @@

 if __name__ == '__main__':
     script = ProcessJobSource()
-    script.configure()
     script.lock_and_run()

Revision history for this message
Edwin Grubbs (edwin-grubbs) wrote :

I ran into some problems merging in db-devel due to some changes in LaunchpadCronScript. I switched to using properties to make it easy for subclasses to override the values without interfering with __init__() logic.

Instead of an incremental diff, here is a full diff of the affected files.

=== added file 'cronscripts/process-job-source.py'
--- cronscripts/process-job-source.py 1970-01-01 00:00:00 +0000
+++ cronscripts/process-job-source.py 2010-10-04 04:01:36 +0000
@@ -0,0 +1,69 @@
+#!/usr/bin/python -S
+#
+# Copyright 2009, 2010 Canonical Ltd. This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Handle jobs for a specified job source class."""
+
+__metaclass__ = type
+
+import sys
+
+import _pythonpath
+from twisted.python import log
+
+from canonical.config import config
+from lp.services.job import runner
+from lp.services.job.runner import JobCronScript
+
+
+class ProcessJobSource(JobCronScript):
+    """Run jobs for a specified job source class."""
+    usage = (
+        "Usage: %prog [options] JOB_SOURCE\n\n"
+        "For more help, run:\n"
+        "    cronscripts/process-job-source-groups.py --help")
+
+    def __init__(self):
+        super(ProcessJobSource, self).__init__()
+        # The fromlist argument is necessary so that __import__()
+        # returns the bottom submodule instead of the top one.
+        module = __import__(self.config_section.module,
+                            fromlist=[self.job_source_name])
+        self.source_interface = getattr(module, self.job_source_name)
+
+    @property
+    def config_name(self):
+        return self.job_source_name
+
+    @property
+    def name(self):
+        return 'process-job-source-%s' % self.job_source_name
+
+    @property
+    def dbuser(self):
+        return self.config_section.dbuser
+
+    @property
+    def runner_class(self):
+        runner_class_name = getattr(
+            self.config_section, 'runner_class', 'JobRunner')
+        # Override attributes that are normally set in __init__().
+        return getattr(runner, runner_class_name)
+
+    def handle_options(self):
+        if len(self.args) != 1:
+            self.parser.print_help()
+            sys.exit(1)
+        self.job_source_name = self.args[0]
+        super(ProcessJobSource, self).handle_options()
+
+    def main(self):
+        if self.options.verbose:
+            log.startLogging(sys.stdout)
+        super(ProcessJobSource, self).main()
+
+
+if __name__ == '__main__':
+    script = ProcessJobSource()
+    script.lock_and_run()

=== modified file 'lib/lp/services/job/runner.py'
--- lib/lp/services/job/runner.py 2010-09-20 09:48:58 +0000
+++ lib/lp/services/job/runner.py 2010-10-04 04:01:36 +0000
@@ -77,7 +77,6 @@

     # We redefine __eq__ and __ne__ here to prevent the security proxy
     # from mucking up our comparisons in tests and elsewhere.
-
     def __eq__(self, job):
         return (
             self.__class__ is removeSecurityProxy(job.__class__)
@@ -337,13 +336,12 @@
     """Run Jobs via twisted."""

     def __init__(self, job_source, dbuser, logger=None, error_utility=None):
-        env = {'PYTHONPATH': os.e...

Revision history for this message
Aaron Bentley (abentley) wrote :

This looks good, except that there are now two definitions of assertTextMatchesExpressionIgnoreWhitespace: one in TestCaseWithFactory and one in BrowserTestCase. Actually, it's not factory-specific, so it should probably go in TestCase.

review: Approve

Preview Diff

=== modified file 'cronscripts/merge-proposal-jobs.py'
--- cronscripts/merge-proposal-jobs.py 2010-05-19 18:07:56 +0000
+++ cronscripts/merge-proposal-jobs.py 2010-10-07 14:14:12 +0000
@@ -32,7 +32,7 @@
     def __init__(self):
         super(RunMergeProposalJobs, self).__init__(
             runner_class=TwistedJobRunner,
-            script_name='merge-proposal-jobs')
+            name='merge-proposal-jobs')


 if __name__ == '__main__':
=== added file 'cronscripts/process-job-source-groups.py'
--- cronscripts/process-job-source-groups.py 1970-01-01 00:00:00 +0000
+++ cronscripts/process-job-source-groups.py 2010-10-07 14:14:12 +0000
@@ -0,0 +1,114 @@
+#!/usr/bin/python -S
+#
+# Copyright 2009, 2010 Canonical Ltd. This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Handle jobs for multiple job source classes."""
+
+__metaclass__ = type
+
+from optparse import IndentedHelpFormatter
+import os
+import subprocess
+import sys
+import textwrap
+
+import _pythonpath
+
+from canonical.config import config
+from lp.services.propertycache import cachedproperty
+from lp.services.scripts.base import LaunchpadCronScript
+
+
+class LongEpilogHelpFormatter(IndentedHelpFormatter):
+    """Preserve newlines in epilog."""
+
+    def format_epilog(self, epilog):
+        if epilog:
+            return '\n%s\n' % epilog
+        else:
+            return ""
+
+
+class ProcessJobSourceGroups(LaunchpadCronScript):
+    """Handle each job source in a separate process with ProcessJobSource."""
+
+    def add_my_options(self):
+        self.parser.usage = "%prog [ -e JOB_SOURCE ] GROUP [GROUP]..."
+        self.parser.epilog = (
+            textwrap.fill(
+                "At least one group must be specified. Excluding job sources "
+                "is useful when you want to run all the other job sources in "
+                "a group.")
+            + "\n\n" + self.group_help)
+
+        self.parser.formatter = LongEpilogHelpFormatter()
+        self.parser.add_option(
+            '-e', '--exclude', dest='excluded_job_sources',
+            metavar="JOB_SOURCE", default=[], action='append',
+            help="Exclude specific job sources.")
+        self.parser.add_option(
+            '--wait', dest='do_wait', default=False, action='store_true',
+            help="Wait for the child processes to finish. This is useful "
+                 "for testing, but it shouldn't be used in a cronjob, since "
+                 "it would prevent the cronjob from processing new jobs "
+                 "if just one of the child processes is still processing, "
+                 "and each process only handles a single job source class.")

+    def main(self):
+        selected_groups = self.args
+        if len(selected_groups) == 0:
+            self.parser.print_help()
+            sys.exit(1)
+
+        selected_job_sources = set()
+        # Include job sources from selected groups.
+        for group in selected_groups:
+            selected_job_sources.update(self.grouped_sources[group])
+        # Then, exclude job sources.
+        for source in self.options.excluded_job_sources:
+            if source not in selected_job_sources:
+                self.logger.info('%r is not in job source groups %s'
+                                 % (source, self.options.groups))
+            else:
+                selected_job_sources.remove(source)
+        # Process job sources.
+        command = os.path.join(
+            os.path.dirname(sys.argv[0]), 'process-job-source.py')
+        child_args = [command]
+        if self.options.verbose:
+            child_args.append('-v')
+        children = []
+        for job_source in selected_job_sources:
+            child = subprocess.Popen(child_args + [job_source])
+            children.append(child)
+        if self.options.do_wait:
+            for child in children:
+                child.wait()
+
+    @cachedproperty
+    def all_job_sources(self):
+        job_sources = config['process-job-source-groups'].job_sources
+        return [job_source.strip() for job_source in job_sources.split(',')]
+
+    @cachedproperty
+    def grouped_sources(self):
+        groups = {}
+        for source in self.all_job_sources:
+            if source not in config:
+                continue
+            section = config[source]
+            group = groups.setdefault(section.crontab_group, [])
+            group.append(source)
+        return groups
+
+    @cachedproperty
+    def group_help(self):
+        return '\n\n'.join(
+            'Group: %s\n    %s' % (group, '\n    '.join(sources))
+            for group, sources in sorted(self.grouped_sources.items()))
+
+
+if __name__ == '__main__':
+    script = ProcessJobSourceGroups()
+    script.lock_and_run()
=== added file 'cronscripts/process-job-source.py'
--- cronscripts/process-job-source.py 1970-01-01 00:00:00 +0000
+++ cronscripts/process-job-source.py 2010-10-07 14:14:12 +0000
@@ -0,0 +1,65 @@
+#!/usr/bin/python -S
+#
+# Copyright 2009, 2010 Canonical Ltd. This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Handle jobs for a specified job source class."""
+
+__metaclass__ = type
+
+import sys
+
+import _pythonpath
+from twisted.python import log
+
+from canonical.config import config
+from lp.services.job import runner
+from lp.services.job.runner import JobCronScript
+
+
+class ProcessJobSource(JobCronScript):
+    """Run jobs for a specified job source class."""
+    usage = (
+        "Usage: %prog [options] JOB_SOURCE\n\n"
+        "For more help, run:\n"
+        "    cronscripts/process-job-source-groups.py --help")
+
+    def __init__(self):
+        super(ProcessJobSource, self).__init__()
+        # The fromlist argument is necessary so that __import__()
+        # returns the bottom submodule instead of the top one.
+        module = __import__(self.config_section.module,
+                            fromlist=[self.job_source_name])
+        self.source_interface = getattr(module, self.job_source_name)
+
+    @property
+    def config_name(self):
+        return self.job_source_name
+
+    @property
+    def name(self):
+        return 'process-job-source-%s' % self.job_source_name
+
+    @property
+    def runner_class(self):
+        runner_class_name = getattr(
+            self.config_section, 'runner_class', 'JobRunner')
+        # Override attributes that are normally set in __init__().
+        return getattr(runner, runner_class_name)
+
+    def handle_options(self):
+        if len(self.args) != 1:
+            self.parser.print_help()
+            sys.exit(1)
+        self.job_source_name = self.args[0]
+        super(ProcessJobSource, self).handle_options()
+
+    def main(self):
+        if self.options.verbose:
+            log.startLogging(sys.stdout)
+        super(ProcessJobSource, self).main()
+
+
+if __name__ == '__main__':
+    script = ProcessJobSource()
+    script.lock_and_run()
=== modified file 'database/schema/security.cfg'
--- database/schema/security.cfg 2010-10-07 14:14:09 +0000
+++ database/schema/security.cfg 2010-10-07 14:14:12 +0000
@@ -715,6 +715,8 @@
 public.teamparticipation = SELECT, DELETE
 public.person = SELECT
 public.emailaddress = SELECT
+public.job = SELECT, INSERT
+public.persontransferjob = SELECT, INSERT

 [karma]
 # Update the KarmaCache table
@@ -1877,6 +1879,20 @@
 public.product = SELECT, UPDATE
 public.bugtracker = SELECT

+[process-job-source-groups]
+# Does not need access to tables.
+type=user
+groups=script
+
+[person-transfer-job]
+type=user
+groups=script
+public.emailaddress = SELECT
+public.job = SELECT, UPDATE
+public.person = SELECT
+public.persontransferjob = SELECT
+public.teammembership = SELECT
+
 [weblogstats]
 # For the script that parses our Apache/Squid logfiles and updates statistics
 type=user
=== modified file 'lib/canonical/config/schema-lazr.conf'
--- lib/canonical/config/schema-lazr.conf 2010-09-24 22:30:48 +0000
+++ lib/canonical/config/schema-lazr.conf 2010-10-07 14:14:12 +0000
@@ -2046,3 +2046,17 @@

 # datatype: boolean
 send_email: true
+
+[process-job-source-groups]
+# This section is used by cronscripts/process-job-source-groups.py.
+dbuser: process-job-source-groups
+# Each job source class also needs its own config section to specify the
+# dbuser, the crontab_group, and the module that the job source class
+# can be loaded from.
+job_sources: IMembershipNotificationJobSource
+
+[IMembershipNotificationJobSource]
+# This section is used by cronscripts/process-job-source.py.
+module: lp.registry.interfaces.persontransferjob
+dbuser: person-transfer-job
+crontab_group: MAIN
=== modified file 'lib/canonical/launchpad/scripts/tests/__init__.py'
--- lib/canonical/launchpad/scripts/tests/__init__.py 2009-06-25 05:39:50 +0000
+++ lib/canonical/launchpad/scripts/tests/__init__.py 2010-10-07 14:14:12 +0000
@@ -31,5 +31,5 @@
         args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
     stdout, stderr = process.communicate()
     if process.returncode != expect_returncode:
-        raise AssertionError('Failed:\n%s' % stderr)
+        raise AssertionError('Failed:\n%s\n%s' % (stdout, stderr))
     return (process.returncode, stdout, stderr)
=== modified file 'lib/lp/archiveuploader/tests/test_ppauploadprocessor.py'
--- lib/lp/archiveuploader/tests/test_ppauploadprocessor.py 2010-10-03 15:30:06 +0000
+++ lib/lp/archiveuploader/tests/test_ppauploadprocessor.py 2010-10-07 14:14:12 +0000
@@ -49,6 +49,7 @@
     )
 from lp.soyuz.model.publishing import BinaryPackagePublishingHistory
 from lp.soyuz.tests.test_publishing import SoyuzTestPublisher
+from lp.testing.mail_helpers import run_mail_jobs


 class TestPPAUploadProcessorBase(TestUploadProcessorBase):
@@ -668,6 +669,7 @@
             ILaunchpadCelebrities).launchpad_beta_testers
         self.name16.leave(beta_testers)
         # Pop the message notifying the membership modification.
+        run_mail_jobs()
         unused = stub.test_emails.pop()

         upload_dir = self.queueUpload("bar_1.0-1", "~name16/ubuntu")
=== modified file 'lib/lp/code/interfaces/branchmergeproposal.py'
--- lib/lp/code/interfaces/branchmergeproposal.py 2010-10-03 15:30:06 +0000
+++ lib/lp/code/interfaces/branchmergeproposal.py 2010-10-07 14:14:12 +0000
@@ -85,6 +85,7 @@
     IJob,
     IJobSource,
     IRunnableJob,
+    ITwistedJobSource,
     )


@@ -551,7 +552,7 @@
     """Destroy this object."""


-class IBranchMergeProposalJobSource(IJobSource):
+class IBranchMergeProposalJobSource(ITwistedJobSource):
     """A job source that will get all supported merge proposal jobs."""

=== modified file 'lib/lp/registry/interfaces/persontransferjob.py'
--- lib/lp/registry/interfaces/persontransferjob.py 2010-10-07 14:14:09 +0000
+++ lib/lp/registry/interfaces/persontransferjob.py 2010-10-07 14:14:12 +0000
@@ -61,6 +61,16 @@
 class IMembershipNotificationJob(IPersonTransferJob):
     """A Job to notify new members of a team of that change."""

+    member = PublicPersonChoice(
+        title=_('Alias for minor_person attribute'),
+        vocabulary='ValidPersonOrTeam',
+        required=True)
+
+    team = PublicPersonChoice(
+        title=_('Alias for major_person attribute'),
+        vocabulary='ValidPersonOrTeam',
+        required=True)
+

 class IMembershipNotificationJobSource(IJobSource):
     """An interface for acquiring IMembershipNotificationJobs."""

=== modified file 'lib/lp/registry/model/persontransferjob.py'
--- lib/lp/registry/model/persontransferjob.py 2010-10-07 14:14:09 +0000
+++ lib/lp/registry/model/persontransferjob.py 2010-10-07 14:14:12 +0000
@@ -104,16 +104,6 @@
         # but the DB representation is unicode.
         self._json_data = json_data.decode('utf-8')

-    @classmethod
-    def get(cls, key):
-        """Return the instance of this class whose key is supplied."""
-        store = IMasterStore(PersonTransferJob)
-        instance = store.get(cls, key)
-        if instance is None:
-            raise SQLObjectNotFound(
-                'No occurrence of %s has key %s' % (cls.__name__, key))
-        return instance
-

 class PersonTransferJobDerived(BaseRunnableJob):
     """Intermediate class for deriving from PersonTransferJob.

=== modified file 'lib/lp/registry/stories/person/xx-approve-members.txt'
--- lib/lp/registry/stories/person/xx-approve-members.txt 2009-10-26 14:37:31 +0000
+++ lib/lp/registry/stories/person/xx-approve-members.txt 2010-10-07 14:14:12 +0000
@@ -23,6 +23,9 @@
     >>> browser.getControl(name='action_7').value = ['approve']
     >>> browser.getControl(name='comment').value = 'Thanks for your interest'
     >>> browser.getControl('Save changes').click()
+    >>> from lp.testing.mail_helpers import run_mail_jobs
+    >>> login(ANONYMOUS)
+    >>> run_mail_jobs()
     >>> len(stub.test_emails)
     12
     >>> for from_addr, to_addrs, raw_msg in sorted(stub.test_emails):

=== added file 'lib/lp/registry/tests/test_membership_notification_job_creation.py'
--- lib/lp/registry/tests/test_membership_notification_job_creation.py 1970-01-01 00:00:00 +0000
+++ lib/lp/registry/tests/test_membership_notification_job_creation.py 2010-10-07 14:14:12 +0000
@@ -0,0 +1,65 @@
+# Copyright 2010 Canonical Ltd.  This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Test that MembershipNotificationJobs are created appropriately."""
+
+__metaclass__ = type
+
+from zope.component import getUtility
+
+from canonical.testing import LaunchpadFunctionalLayer
+from lp.registry.interfaces.person import IPersonSet
+from lp.registry.interfaces.persontransferjob import (
+    IMembershipNotificationJobSource,
+    )
+from lp.registry.interfaces.teammembership import (
+    ITeamMembershipSet,
+    TeamMembershipStatus,
+    )
+from lp.registry.model.persontransferjob import MembershipNotificationJob
+from lp.services.job.interfaces.job import JobStatus
+from lp.testing import (
+    login_person,
+    TestCaseWithFactory,
+    )
+from lp.testing.sampledata import ADMIN_EMAIL
+
+
+class CreateMembershipNotificationJobTest(TestCaseWithFactory):
+    """Test that MembershipNotificationJobs are created appropriately."""
+    layer = LaunchpadFunctionalLayer
+
+    def setUp(self):
+        super(CreateMembershipNotificationJobTest, self).setUp()
+        self.person = self.factory.makePerson(name='murdock')
+        self.team = self.factory.makeTeam(name='a-team')
+        self.job_source = getUtility(IMembershipNotificationJobSource)
+
+    def test_setstatus_admin(self):
+        login_person(self.team.teamowner)
+        self.team.addMember(self.person, self.team.teamowner)
+        membership_set = getUtility(ITeamMembershipSet)
+        tm = membership_set.getByPersonAndTeam(self.person, self.team)
+        tm.setStatus(TeamMembershipStatus.ADMIN, self.team.teamowner)
+        jobs = list(self.job_source.iterReady())
+        job_info = [
+            (job.__class__, job.member, job.team, job.status)
+            for job in jobs]
+        self.assertEqual(
+            [(MembershipNotificationJob,
+              self.person,
+              self.team,
+              JobStatus.WAITING),
+             ],
+            job_info)
+
+    def test_setstatus_silent(self):
+        person_set = getUtility(IPersonSet)
+        admin = person_set.getByEmail(ADMIN_EMAIL)
+        login_person(admin)
+        self.team.addMember(self.person, self.team.teamowner)
+        membership_set = getUtility(ITeamMembershipSet)
+        tm = membership_set.getByPersonAndTeam(self.person, self.team)
+        tm.setStatus(
+            TeamMembershipStatus.ADMIN, admin, silent=True)
+        self.assertEqual([], list(self.job_source.iterReady()))
=== added file 'lib/lp/registry/tests/test_process_job_sources_cronjob.py'
--- lib/lp/registry/tests/test_process_job_sources_cronjob.py 1970-01-01 00:00:00 +0000
+++ lib/lp/registry/tests/test_process_job_sources_cronjob.py 2010-10-07 14:14:12 +0000
@@ -0,0 +1,125 @@
+# Copyright 2010 Canonical Ltd.  This software is licensed under the
+# GNU Affero General Public License version 3 (see the file LICENSE).
+
+"""Test cron script for processing jobs from any job source class."""
+
+__metaclass__ = type
+
+import os
+import subprocess
+
+import transaction
+from zope.component import getUtility
+
+from canonical.config import config
+from canonical.launchpad.scripts.tests import run_script
+from canonical.testing import LaunchpadScriptLayer
+from lp.registry.interfaces.teammembership import (
+    ITeamMembershipSet,
+    TeamMembershipStatus,
+    )
+from lp.testing import (
+    login_person,
+    TestCaseWithFactory,
+    )
+
+
+class ProcessJobSourceTest(TestCaseWithFactory):
+    """Test the process-job-source.py script."""
+    layer = LaunchpadScriptLayer
+    script = 'cronscripts/process-job-source.py'
+
+    def tearDown(self):
+        super(ProcessJobSourceTest, self).tearDown()
+        self.layer.force_dirty_database()
+
+    def test_missing_argument(self):
+        # The script should display usage info when called without any
+        # arguments.
+        returncode, output, error = run_script(
+            self.script, [], expect_returncode=1)
+        self.assertIn('Usage:', output)
+        self.assertIn('process-job-source.py [options] JOB_SOURCE', output)
+
+    def test_empty_queue(self):
+        # The script should just create a lockfile and exit if no jobs
+        # are in the queue.
+        returncode, output, error = run_script(
+            self.script, ['IMembershipNotificationJobSource'])
+        expected = (
+            'INFO Creating lockfile: .*launchpad-process-job-'
+            'source-IMembershipNotificationJobSource.lock.*')
+        self.assertTextMatchesExpressionIgnoreWhitespace(expected, error)
+
+    def test_processed(self):
+        # The script should output the number of jobs it processed.
+        person = self.factory.makePerson(name='murdock')
+        team = self.factory.makeTeam(name='a-team')
+        login_person(team.teamowner)
+        team.addMember(person, team.teamowner)
+        membership_set = getUtility(ITeamMembershipSet)
+        tm = membership_set.getByPersonAndTeam(person, team)
+        tm.setStatus(TeamMembershipStatus.ADMIN, team.teamowner)
+        transaction.commit()
+        returncode, output, error = run_script(
+            self.script, ['-v', 'IMembershipNotificationJobSource'])
+        self.assertIn(
+            ('DEBUG Running <MEMBERSHIP_NOTIFICATION branch job (1) '
+             'for murdock as part of a-team. status=Waiting>'),
+            error)
+        self.assertIn('DEBUG MembershipNotificationJob sent email', error)
+        self.assertIn('Ran 1 MembershipNotificationJob jobs.', error)
+
+
+class ProcessJobSourceGroupsTest(TestCaseWithFactory):
+    """Test the process-job-source-groups.py script."""
+    layer = LaunchpadScriptLayer
+    script = 'cronscripts/process-job-source-groups.py'
+
+    def tearDown(self):
+        super(ProcessJobSourceGroupsTest, self).tearDown()
+        self.layer.force_dirty_database()
+
+    def test_missing_argument(self):
+        # The script should display usage info when called without any
+        # arguments.
+        returncode, output, error = run_script(
+            self.script, [], expect_returncode=1)
+        self.assertIn(
+            ('Usage: process-job-source-groups.py '
+             '[ -e JOB_SOURCE ] GROUP [GROUP]...'),
+            output)
+        self.assertIn('-e JOB_SOURCE, --exclude=JOB_SOURCE', output)
+        self.assertIn('At least one group must be specified.', output)
+        self.assertIn('Group: MAIN\n    IMembershipNotificationJobSource',
+                      output)
+
+    def test_empty_queue(self):
+        # The script should just create a lockfile, launch a child for
+        # each job source class, and then exit if no jobs are in the queue.
+        returncode, output, error = run_script(self.script, ['MAIN'])
+        expected = (
+            '.*Creating lockfile:.*launchpad-processjobsourcegroups.lock'
+            '.*Creating lockfile:.*launchpad-process-job-'
+            'source-IMembershipNotificationJobSource.lock.*')
+        self.assertTextMatchesExpressionIgnoreWhitespace(expected, error)
+
+    def test_processed(self):
+        # The script should output the number of jobs that have been
+        # processed by its child processes.
+        person = self.factory.makePerson(name='murdock')
+        team = self.factory.makeTeam(name='a-team')
+        login_person(team.teamowner)
+        team.addMember(person, team.teamowner)
+        membership_set = getUtility(ITeamMembershipSet)
+        tm = membership_set.getByPersonAndTeam(person, team)
+        tm.setStatus(TeamMembershipStatus.ADMIN, team.teamowner)
+        transaction.commit()
+        returncode, output, error = run_script(
+            self.script, ['-v', '--wait', 'MAIN'])
+        self.assertTextMatchesExpressionIgnoreWhitespace(
+            ('DEBUG Running <MEMBERSHIP_NOTIFICATION branch job (.*) '
+             'for murdock as part of a-team. status=Waiting>'),
+            error)
+        self.assertIn('DEBUG MembershipNotificationJob sent email', error)
+        self.assertIn('Ran 1 MembershipNotificationJob jobs.', error)
=== modified file 'lib/lp/services/job/interfaces/job.py'
--- lib/lp/services/job/interfaces/job.py 2010-08-20 20:31:18 +0000
+++ lib/lp/services/job/interfaces/job.py 2010-10-07 14:14:12 +0000
@@ -11,6 +11,7 @@
     'IJob',
     'IJobSource',
     'IRunnableJob',
+    'ITwistedJobSource',
     'JobStatus',
     'LeaseHeld',
     ]
@@ -166,3 +167,10 @@

     def contextManager():
         """Get a context for running this kind of job in."""
+
+
+class ITwistedJobSource(IJobSource):
+    """Interface for a job source that is usable by the TwistedJobRunner."""
+
+    def get(id):
+        """Get a job by its id."""

=== modified file 'lib/lp/services/job/runner.py'
--- lib/lp/services/job/runner.py 2010-09-20 09:48:58 +0000
+++ lib/lp/services/job/runner.py 2010-10-07 14:14:12 +0000
@@ -77,7 +77,6 @@

     # We redefine __eq__ and __ne__ here to prevent the security proxy
     # from mucking up our comparisons in tests and elsewhere.
-
     def __eq__(self, job):
         return (
             self.__class__ is removeSecurityProxy(job.__class__)
@@ -337,13 +336,12 @@
     """Run Jobs via twisted."""

     def __init__(self, job_source, dbuser, logger=None, error_utility=None):
-        env = {'PYTHONPATH': os.environ['PYTHONPATH'],
-               'PATH': os.environ['PATH']}
-        lp_config = os.environ.get('LPCONFIG')
-        if lp_config is not None:
-            env['LPCONFIG'] = lp_config
+        env = {'PATH': os.environ['PATH']}
+        for name in ('PYTHONPATH', 'LPCONFIG'):
+            if name in os.environ:
+                env[name] = os.environ[name]
         starter = main.ProcessStarter(
-            packages=('twisted', 'ampoule'), env=env)
+            packages=('_pythonpath', 'twisted', 'ampoule'), env=env)
         super(TwistedJobRunner, self).__init__(logger, error_utility)
         self.job_source = job_source
         import_name = '%s.%s' % (
@@ -367,7 +365,8 @@
         transaction.commit()
         job_id = job.id
         deadline = timegm(job.lease_expires.timetuple())
-        self.logger.debug('Running %r, lease expires %s', job, job.lease_expires)
+        self.logger.debug(
+            'Running %r, lease expires %s', job, job.lease_expires)
         deferred = self.pool.doWork(
             RunJobCommand, job_id = job_id, _deadline=deadline)
         def update(response):
@@ -442,14 +441,21 @@
 class JobCronScript(LaunchpadCronScript):
     """Base class for scripts that run jobs."""

-    def __init__(self, runner_class=JobRunner, test_args=None,
-                 script_name=None):
-        self.dbuser = getattr(config, self.config_name).dbuser
-        if script_name is None:
-            script_name = self.config_name
+    config_name = None
+
+    def __init__(self, runner_class=JobRunner, test_args=None, name=None):
         super(JobCronScript, self).__init__(
-            script_name, self.dbuser, test_args)
-        self.runner_class = runner_class
+            name=name, dbuser=None, test_args=test_args)
+        self._runner_class = runner_class
+
+    @property
+    def dbuser(self):
+        return self.config_section.dbuser
+
+    @property
+    def runner_class(self):
+        """Enable subclasses to override with command-line arguments."""
+        return self._runner_class

     def job_counts(self, jobs):
         """Return a list of tuples containing the job name and counts."""
@@ -458,8 +464,18 @@
             counts[job.__class__.__name__] += 1
         return sorted(counts.items())

+    @property
+    def config_section(self):
+        return getattr(config, self.config_name)
+
     def main(self):
-        errorlog.globalErrorUtility.configure(self.config_name)
+        section = self.config_section
+        if (getattr(section, 'error_dir', None) is not None
+            and getattr(section, 'oops_prefix', None) is not None
+            and getattr(section, 'copy_to_zlog', None) is not None):
+            # If the three variables are not set, we will let the error
+            # utility default to using the [error_reports] config.
+            errorlog.globalErrorUtility.configure(self.config_name)
         job_source = getUtility(self.source_interface)
         runner = self.runner_class.runFromSource(
             job_source, self.dbuser, self.logger)

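The `TwistedJobRunner.__init__` change above replaces unconditional `os.environ` lookups with an allow-list copy: `PATH` is always propagated to the ampoule child, while `PYTHONPATH` and `LPCONFIG` are copied only when set, so their absence no longer raises `KeyError`. A standalone sketch of that pattern (the function name is illustrative, not part of this branch):

```python
import os


def child_environment():
    """Build a minimal environment for a child process.

    PATH is always copied; PYTHONPATH and LPCONFIG are copied only
    when present in the parent's environment.
    """
    env = {'PATH': os.environ['PATH']}
    for name in ('PYTHONPATH', 'LPCONFIG'):
        if name in os.environ:
            env[name] = os.environ[name]
    return env
```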
=== modified file 'lib/lp/services/scripts/base.py'
--- lib/lp/services/scripts/base.py 2010-10-03 15:30:06 +0000
+++ lib/lp/services/scripts/base.py 2010-10-07 14:14:12 +0000
@@ -37,9 +37,6 @@
     )
 from canonical.lp import initZopeless

-from lp.services.log.mappingfilter import MappingFilter
-from lp.services.log.nullhandler import NullHandler
-

 LOCK_PATH = "/var/lock/"
 UTC = pytz.UTC
@@ -152,22 +149,21 @@
         useful in test scripts.
         """
         if name is None:
-            self.name = self.__class__.__name__.lower()
+            self._name = self.__class__.__name__.lower()
         else:
-            self.name = name
+            self._name = name

-        self.dbuser = dbuser
-
-        if self.description is None:
-            description = self.__doc__
-        else:
-            description = self.description
+        self._dbuser = dbuser

         # The construction of the option parser is a bit roundabout, but
         # at least it's isolated here. First we build the parser, then
         # we add options that our logger object uses, then call our
         # option-parsing hook, and finally pull out and store the
         # supplied options and args.
+        if self.description is None:
+            description = self.__doc__
+        else:
+            description = self.description
         self.parser = OptionParser(usage=self.usage,
                                    description=description)
         scripts.logger_options(self.parser, default=self.loglevel)
@@ -177,9 +173,23 @@
             "profiling stats in FILE."))
         self.add_my_options()
         self.options, self.args = self.parser.parse_args(args=test_args)
-        self.logger = scripts.logger(self.options, name)
-
-        self.lockfilepath = os.path.join(LOCK_PATH, self.lockfilename)
+
+        # Enable subclasses to easily override these __init__()
+        # arguments using command-line arguments.
+        self.handle_options()
+
+    def handle_options(self):
+        self.logger = scripts.logger(self.options, self.name)
+
+    @property
+    def name(self):
+        """Enable subclasses to override with command-line arguments."""
+        return self._name
+
+    @property
+    def dbuser(self):
+        """Enable subclasses to override with command-line arguments."""
+        return self._dbuser

     #
     # Hooks that we expect users to redefine.
@@ -226,6 +236,10 @@
         """
         return "launchpad-%s.lock" % self.name

+    @property
+    def lockfilepath(self):
+        return os.path.join(LOCK_PATH, self.lockfilename)
+
     def setup_lock(self):
         """Create lockfile.

@@ -275,8 +289,10 @@

     @log_unhandled_exception_and_exit
     def run(self, use_web_security=False, implicit_begin=True,
-            isolation=ISOLATION_LEVEL_DEFAULT):
+            isolation=None):
         """Actually run the script, executing zcml and initZopeless."""
+        if isolation is None:
+            isolation = ISOLATION_LEVEL_DEFAULT
         self._init_zca(use_web_security=use_web_security)
         self._init_db(implicit_begin=implicit_begin, isolation=isolation)

@@ -339,17 +355,15 @@
     """Logs successful script runs in the database."""

     def __init__(self, name=None, dbuser=None, test_args=None):
-        """Initialize, and sys.exit() if the cronscript is disabled.
-
-        Rather than hand editing crontab files, cronscripts can be
-        enabled and disabled using a config file.
-
-        The control file location is specified by
-        config.canonical.cron_control_url.
-        """
         super(LaunchpadCronScript, self).__init__(name, dbuser, test_args)

-        name = self.name
+        # self.name is used instead of the name argument, since it may
+        # have been overridden by command-line parameters or by
+        # overriding the name property.
+        enabled = cronscript_enabled(
+            config.canonical.cron_control_url, self.name, self.logger)
+        if not enabled:
+            sys.exit(0)

         # Configure the IErrorReportingUtility we use with defaults.
         # Scripts can override this if they want.
@@ -359,12 +373,10 @@
         globalErrorUtility.copy_to_zlog = False

         # WARN and higher log messages should generate OOPS reports.
-        logging.getLogger().addHandler(OopsHandler(name))
-
-        enabled = cronscript_enabled(
-            config.canonical.cron_control_url, name, self.logger)
-        if not enabled:
-            sys.exit(0)
+        # self.name is used instead of the name argument, since it may
+        # have been overridden by command-line parameters or by
+        # overriding the name property.
+        logging.getLogger().addHandler(OopsHandler(self.name))

     @log_unhandled_exception_and_exit
     def record_activity(self, date_started, date_completed):

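The `base.py` refactoring above moves `name` and `dbuser` behind read-only properties backed by `_name` and `_dbuser`, so a subclass can compute them at access time (e.g. from a command-line argument parsed after `__init__` ran) rather than being stuck with the value captured at construction. A minimal sketch of the pattern, with hypothetical class names:

```python
class Script(object):
    """Base class: name defaults to the lowercased class name."""

    def __init__(self, name=None):
        if name is None:
            self._name = self.__class__.__name__.lower()
        else:
            self._name = name

    @property
    def name(self):
        """Subclasses may override this with a computed value."""
        return self._name


class ProcessJobSource(Script):
    # Hypothetical subclass: derives its name from a runtime
    # attribute instead of the value stored in __init__.
    job_source = 'IMembershipNotificationJobSource'

    @property
    def name(self):
        return 'process-job-source-%s' % self.job_source
```

Because `LaunchpadCronScript` now reads `self.name` (the property) when checking `cronscript_enabled` and installing the `OopsHandler`, such overrides take effect everywhere the name is used.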
=== modified file 'lib/lp/testing/__init__.py'
--- lib/lp/testing/__init__.py 2010-09-29 20:29:43 +0000
+++ lib/lp/testing/__init__.py 2010-10-07 14:14:12 +0000
@@ -441,6 +441,16 @@
             "Expected %s to be %s, but it was %s."
             % (attribute_name, date, getattr(sql_object, attribute_name)))

+    def assertTextMatchesExpressionIgnoreWhitespace(self,
+                                                    regular_expression_txt,
+                                                    text):
+        def normalise_whitespace(text):
+            return ' '.join(text.split())
+        pattern = re.compile(
+            normalise_whitespace(regular_expression_txt), re.S)
+        self.assertIsNot(
+            None, pattern.search(normalise_whitespace(text)), text)
+
     def assertIsInstance(self, instance, assert_class):
         """Assert that an instance is an instance of assert_class.

@@ -686,16 +696,6 @@
         super(BrowserTestCase, self).setUp()
         self.user = self.factory.makePerson(password='test')

-    def assertTextMatchesExpressionIgnoreWhitespace(self,
-                                                    regular_expression_txt,
-                                                    text):
-        def normalise_whitespace(text):
-            return ' '.join(text.split())
-        pattern = re.compile(
-            normalise_whitespace(regular_expression_txt), re.S)
-        self.assertIsNot(
-            None, pattern.search(normalise_whitespace(text)), text)
-
     def getViewBrowser(self, context, view_name=None, no_login=False):
         login(ANONYMOUS)
         url = canonical_url(context, view_name=view_name)
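`assertTextMatchesExpressionIgnoreWhitespace`, hoisted above from `BrowserTestCase` into the shared test-case class so the new cron-script tests can use it, collapses every whitespace run to a single space in both the pattern and the text before matching, which makes log-output assertions immune to wrapping and indentation. The core logic as a free function (the function name is ours, for illustration):

```python
import re


def matches_ignoring_whitespace(regular_expression_txt, text):
    """Return True if the regex matches, treating any whitespace run
    as a single space on both the pattern and the text."""
    def normalise_whitespace(text):
        return ' '.join(text.split())
    pattern = re.compile(
        normalise_whitespace(regular_expression_txt), re.S)
    return pattern.search(normalise_whitespace(text)) is not None
```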
